GROMACS installation

Dear support team-
We'd like to install GROMACS to run some molecular dynamics simulations (we have an account and an open project on the cluster).
The version installed via Conda does not seem optimal for use on the cluster. Can we compile GROMACS in another way?
Thanks

Hello Cyril,

It's installed: https://gitlab.com/ifb-elixirfr/cluster/tools/-/merge_requests/180

module load gromacs/2020.2
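To check it is working (the Conda-based build should provide the standard gmx binary):

module load gromacs/2020.2
gmx --version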

Hello. Thanks!

Dear support team-
It seems that the current installation of GROMACS you provided (!180) was compiled with thread-MPI, which is typically meant for multi-core workstations and does not handle parallelisation of jobs across multiple nodes (our application requires running jobs on ~3-6 nodes). Would it be possible to install an MPI library and compile GROMACS against it? The GROMACS website says: "The GROMACS team recommends OpenMPI version 1.6 (or higher), MPICH version 1.4.1 (or higher), or your hardware vendor's MPI installation."
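To illustrate (a rough sketch; "production" is a placeholder for our run files): with the current thread-MPI build all ranks have to fit on a single node, e.g.

gmx mdrun -ntmpi 16 -deffnm production

whereas an MPI-enabled build could be launched across several nodes with mpirun, e.g.

mpirun -np 64 gmx_mpi mdrun -deffnm production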
Thank you very much-
Cyril

Dear Cyril,

You're right (I saw your previous message, "Installation gromacs").

But it's not straightforward to install a version of GROMACS with another MPI library (we use Conda, where the build parameters are predefined).
So it might take some time.

Is it blocking?

Thanks !
Yes, for now we can only use one node and get only ~10 ns of simulated time per day on a relatively small system (the target is ideally 500-1000 ns), so it would really help to be able to parallelise things.

Not sure it helps, but here is some info I could find:
" If you wish to run in parallel on multiple machines across a network, you will need to have

  • an MPI library installed that supports the MPI 1.3 standard, and
  • wrapper compilers that will compile code using that library.

To compile with MPI set your compiler to the normal (non-MPI) compiler and add -DGMX_MPI=on to the cmake options. It is possible to set the compiler to the MPI compiler wrapper but it is neither necessary nor recommended.

The GROMACS team recommends OpenMPI version 1.6 (or higher), MPICH version 1.4.1 (or higher), or your hardware vendor’s MPI installation. The most recent version of either of these is likely to be the best. More specialized networks might depend on accelerations only available in the vendor’s library. LAM-MPI might work, but since it has been deprecated for years, it is not supported.

For example, depending on your actual MPI library, use cmake -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx -DGMX_MPI=on."

Also: there seems to be an openmpi package available via conda:

openmpi 4.0.4 hdf1f1ad_0 conda-forge

so installing this as a module could be enough; I could then try to compile GROMACS on my own, roughly as sketched below.
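Something like this, assuming an openmpi module gets installed (module name and install prefix are placeholders):

module load openmpi
tar xzf gromacs-2020.2.tar.gz
cd gromacs-2020.2 && mkdir build && cd build
# configure against the MPI compiler wrappers; build FFTW as part of the build
cmake .. -DGMX_MPI=on -DGMX_BUILD_OWN_FFTW=ON \
    -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx \
    -DCMAKE_INSTALL_PREFIX=$HOME/bin/gromacs-2020.2_mpi
make -j 8 && make install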

What do you think?
Thanks

Yes, let's try this.

OpenMPI (4.0.4) installation is in progress.

Great! Many thanks!

OpenMPI is now available.

As usual:

module load openmpi/4.0.4
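A quick way to check that the module is picked up, e.g.:

module load openmpi/4.0.4
mpirun --version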

Thanks to @Francois for the review.

Thanks David and Francois.
cheers

Dear Support Team

We tried compiling GROMACS by:

  1. installing cmake 3.14.0 from conda

  2. following the official install instructions with the settings:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_BUILD_MDRUN_ONLY=ON -DCMAKE_INSTALL_PREFIX=/shared/home/chanus/bin/gromacs-2020.2_node_build -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx -DGMX_DOUBLE=off

This worked and produced a functioning GROMACS build on the head node; we were able to run a sample job there with several MPI tasks.
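For reference, the test on the head node was along these lines (the input file name is a placeholder):

module load openmpi/4.0.4
mpirun -np 4 /shared/home/chanus/bin/gromacs-2020.2_node_build/bin/mdrun_mpi -s test.tpr -deffnm test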

However, it failed when submitted to a compute node:

mpirun noticed that process rank 3 with PID 114066 on node cpu-node-82 exited on signal 4 (Illegal instruction).

Logging in to the nodes also confirmed the suspicion that the architecture of the head node differs from the rest, so the compiled binary does not run there.

We then tried compiling GROMACS directly on a compute node. CMake worked fine; however, we were unable to run mpicc (the OpenMPI compiler wrapper), which was likely also built for the head node. A test compilation without mpicc failed because of the outdated gcc 4.8.5 (it only works if one selects mpicc on the head node).

We are therefore stuck :frowning: :frowning:

If the architecture of all the production nodes is the same, what might work would be to install the OpenMPI compilers (mpicc and mpicxx, both in the openmpi package) on one of the production nodes. The compilation of GROMACS might then be possible on that node, and we would simply use that binary for all jobs, roughly as sketched below.
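Something along these lines (a sketch; resource numbers and the build/install paths are placeholders):

# get an interactive shell on a production node
srun --nodes=1 --cpus-per-task=8 --time=02:00:00 --pty bash

# then, on that node:
module load openmpi/4.0.4
cd ~/gromacs-2020.2/build_node
cmake .. -DGMX_MPI=on -DGMX_BUILD_OWN_FFTW=ON \
    -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx \
    -DCMAKE_INSTALL_PREFIX=$HOME/bin/gromacs-2020.2_node
make -j 8 && make install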

If this does not work (or is impossible), here is a link to the GROMACS installation guide, which contains a section on "configuring with CMake" describing how to compile on the head node for nodes with a different architecture.

http://manual.gromacs.org/documentation/current/install-guide/index.html

This requires somewhat more specific knowledge of CMake and of the cluster details, so most likely only you would be able to do that.
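From a quick read of that guide, the relevant knob seems to be GMX_SIMD, which could be set to the instruction set of the oldest compute nodes so that a head-node build still runs everywhere; AVX_256 is our guess for the lowest common denominator:

cmake .. -DGMX_MPI=on -DGMX_BUILD_OWN_FFTW=ON \
    -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx \
    -DGMX_SIMD=AVX_256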

Your help would be greatly appreciated-

Cheers

Dear Cyril,

I'm puzzled.
I successfully ran a job using the openmpi/4.0.4 module with mpirun under salloc/sbatch, but it fails with srun.

Could you tell me how you launch your job?

I also realized that the openmpi/4.0.4 module doesn't include compilers like mpicc...
I've just added them to the module.
The installation is in progress: https://gitlab.com/ifb-elixirfr/cluster/tools/-/merge_requests/213

Some extra notes:

  • As far as I understand, Conda uses "precompiled" binaries (software is not compiled on our frontend or compute nodes)
  • Cluster hardware (see the quick check below): https://ifb-elixirfr.gitlab.io/cluster/doc/cluster-desc/
    • frontend: DELL R630, 2x Intel Xeon E5-2620v4 (2.1GHz, 8 cores), 128GB RAM
    • cpu-node-[1,67]: DELL C6320, 2x Intel Xeon E5-2695v3 (2.3GHz, 14 cores), 256GB RAM
    • cpu-node-69: DELL R930, 4x Intel Xeon E7-8860v3 (2.2GHz, 16 cores), 3 TB RAM
    • cpu-node-70,77: DELL C6220, 2x Intel Xeon E5-2650v2 (2.6GHz, 8 cores), 128GB RAM
    • cpu-node-78,79,82,83: DELL C8220, 2x Intel Xeon E5-2650v2 (2.6GHz, 8 cores), 128GB RAM
    • cpu-node-80,81: DELL C8220, 2x Intel Xeon E5-2680v2 (2.6GHz, 8 cores), 256GB RAM
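A quick way to compare instruction sets between the frontend and one of the failing nodes (a sketch; the "Illegal instruction" would be consistent with the frontend supporting AVX2 while the v2 Xeons only support AVX):

# on the frontend
grep -o 'avx[a-z0-9_]*' /proc/cpuinfo | sort -u
# on cpu-node-82, via Slurm
srun -w cpu-node-82 bash -c "grep -o 'avx[a-z0-9_]*' /proc/cpuinfo | sort -u"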

To be continued...

Much appreciated! Thanks for the note.
Here's the script used to launch the job:

#!/bin/bash
# Initial working directory:
#SBATCH -D ./
#
#SBATCH -J Nmpi
#
# Requests two GPUs per node
##SBATCH --constraint="gpu"
##SBATCH --gres=gpu:2
# You can run using only one GPU by using the line below.
###SBATCH --gres=gpu:1
# Request 1 nodes
#SBATCH --nodes=1
# Set the number of tasks per node (=MPI ranks)
#SBATCH --ntasks-per-node=4
# Set the number of threads per rank (=OpenMP threads)
#SBATCH --cpus-per-task=7
# Explicitly disable hyperthreading
#SBATCH --ntasks-per-core=1
#
# wall clock limit:
#SBATCH --time=24:00:00

#SBATCH --mem=4000M

module purge
module load openmpi
#module load gromacs/2020.2

#export OMP_NUM_THREADS=7

# Pin OpenMP threads to cores. Use if hyperthreading is DISABLED
export OMP_PLACES="cores"
export mdrun=/shared/home/chanus/bin/gromacs-2020.2_build/bin/mdrun_mpi
# Run gromacs

export runname="production"

mpirun -np 4 $mdrun -o ${runname}.trr -x ${runname}.xtc -g ${runname}.log -e ${runname}.edr -cpi ${runname}.cpt -cpo ${runname}.cpt -c ${runname}.gro -s ${runname}.tpr -ntomp 7

cheers

Cyril,

The openmpi/4.0.4 module now contains the MPI compilers (mpicxx, mpifort, ...).
Thanks @gseith for the review.
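For a quick check, e.g.:

module load openmpi/4.0.4
mpicc --version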

Thank you. Let's see if it solves the issue.
Cheers

Hello,
Would it be possible to install the latest version of GROMACS (gromacs-2024) with GPU support?
Thank you very much,
Have a nice day.