Software

Information about compilers, MPI, and other software for the currently operating clusters is available on each cluster's separate page.

Applications

Chemistry

Application          Košice      Žilina
Gromacs              -           2022
LAMMPS               sep-2021    sep-2021
NAMD                 2.14        2.14
NWChem               6.8         6.5
Quantum ESPRESSO     6.7         -

(A dash means the application is not installed on that cluster.)

Chemistry

Gromacs

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.

GROMACS is installed on the Žilina cluster. You can find the installed version in the /gpfs/home/freeware/LINUX/gromacs-2022 folder. An example job execution script and a sample input file are in the /gpfs/home/freeware/LINUX/EXAMPLE_JOBS/gromacs/2022 folder.
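
A typical workflow is to copy the example job into your own directory before editing and submitting it (a minimal sketch; the target directory name is arbitrary):

# copy the provided example job into your home directory
cp -r /gpfs/home/freeware/LINUX/EXAMPLE_JOBS/gromacs/2022 ~/gromacs_test
cd ~/gromacs_test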

Running

GROMACS is compiled for parallel execution. Example run script:

#!/bin/bash
#@ job_type = MPICH
#@ job_name = gromacs
#@ class = cluster_short
#@ error = job.err
#@ output = job.out
#@ network.MPI = sn_all,not_shared,US
#@ node = 1
#@ tasks_per_node = 32
#@ node_usage = shared
#@ queue

module()
{
    # define the "module" command for this batch shell via modulecmd
    eval `/usr/bin/modulecmd bash $*`
}

module load mpi/mpich-3.4.1-gnu_8.2
module load gnu/gcc-8.2.0
module load python/python-3.7.2

source /gpfs/home/freeware/LINUX/gromacs-2022/bin/GMXRC

$(which mpiexec) -n $LOADL_TOTAL_TASKS -f $LOADL_HOSTFILE -iface ib0 \
    -bind-to core -launcher rsh \
    gmx_mpi mdrun -npme 0 -ntomp 1 -s benchMEM.tpr -cpt 1440 \
    -nsteps 5000 -resetstep 2500 -v -noconfout > mdrun.out 2>&1
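
The #@ directives above are LoadLeveler keywords, so this script (like the other Žilina scripts below) is submitted with llsubmit and the queue can be inspected with llq (a usage sketch; the script filename is hypothetical):

llsubmit run_gromacs.job   # submit the job to LoadLeveler
llq -u $USER               # check the status of your jobs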

Link to complete documentation: https://manual.gromacs.org/documentation/.

LAMMPS

LAMMPS is a classical molecular dynamics (MD) code that models ensembles of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, solid-state (metals, ceramics, oxides), granular, coarse-grained, or macroscopic systems using a variety of interatomic potentials (force fields) and boundary conditions. It can model 2d or 3d systems with only a few particles up to millions or billions.

LAMMPS is installed on both the Košice and Žilina clusters.

Košice:

You can find the installed versions in the /lustre/home/freeware/lammps folder. An example job execution script and a sample input file are in the /lustre/home/freeware/EXAMPLE_JOBS/lammps folder.

Žilina:

You can find the installed versions in the /gpfs/home/freeware/LINUX/lammps folder. An example job execution script and a sample input file are in the /gpfs/home/freeware/LINUX/EXAMPLE_JOBS/lammps folder.

Running

LAMMPS is compiled for parallel execution. Example run script for the Košice cluster:

#!/bin/bash
#SBATCH -p short
#SBATCH -J test # job name
#SBATCH -o job.%j.out # name of the stdout output file (%j expands to the job ID)
#SBATCH -N 1 # total number of nodes requested
#SBATCH -n 12 # total number of MPI tasks requested

module purge
module load gnu7/7.3.0 openmpi3/3.1.0 ohpc
module load openblas/0.2.20 fftw/3.3.8 python/3.8.12
export OMP_NUM_THREADS=1

LAMMPS_DIR=/lustre/home/freeware/lammps/sep2021

$(which mpirun) -x LD_LIBRARY_PATH $LAMMPS_DIR/bin/lmp -in input
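
The #SBATCH directives are Slurm keywords, so this script (like the other Košice scripts below) is submitted with sbatch and monitored with squeue (a usage sketch; the script filename is hypothetical):

sbatch run_lammps.sh   # submit the job to Slurm
squeue -u $USER        # check the status of your jobs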

Example run script for the Žilina cluster:

#!/bin/bash
#@ job_type = parallel
#@ job_name = lammps
#@ class = cluster_short
#@ error = $(job_name).err
#@ output = $(job_name).out
#@ node = 1
#@ tasks_per_node = 32
#@ node_usage = shared
#@ queue

source /etc/profile.d/modules.sh
module load gnu/gcc-8.2.0 mpi/mpich-3.4.1-gnu_8.2
export OMP_NUM_THREADS=1

LAMMPS_DIR=/gpfs/home/freeware/LINUX/lammps/20Sep2021/

$(which mpirun) -n 32 $LAMMPS_DIR/bin/lmp -in input

Link to complete documentation: https://docs.lammps.org/.

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 500,000 cores for the largest simulations.

NAMD is installed on the Žilina and Košice clusters.

Košice:

You can find the installed version in the /lustre/home/freeware/namd/NAMD_2.14_Linux-x86_64-verbs/ folder. An example job execution script and a sample input file are in the /lustre/home/freeware/EXAMPLE_JOBS/namd folder.

Žilina:

You can find the installed version in the /gpfs/home/freeware/LINUX/namd/2.14/Linux-POWER-g++ folder. An example job execution script and a sample input file are in the /gpfs/home/freeware/LINUX/EXAMPLE_JOBS/namd folder.

Running

NAMD is compiled for parallel execution. Example run script for the Žilina cluster:

#!/bin/bash
#@ job_type = parallel
#@ job_name = namd
#@ class = cluster_short
#@ error = job.err
#@ output = job.out
#@ network.MPI = sn_all,not_shared,US
#@ node = 1
#@ tasks_per_node = 32
#@ rset = RSET_MCM_AFFINITY
#@ mcm_affinity_options = mcm_mem_req mcm_distribute mcm_sni_none
#@ task_affinity = core(1)
#@ queue
source /etc/profile.d/modules.sh
module load mpi/mpich-3.4.1-gnu_8.2 gnu/gcc-8.2.0
$(which mpirun) -n $LOADL_TOTAL_TASKS -f $LOADL_HOSTFILE -iface ib0 -bind-to core:1 /gpfs/home/freeware/LINUX/namd/2.14/Linux-POWER-g++/namd2 apoa1.namd

Example run script for the Košice cluster:

#!/bin/bash
#SBATCH --job-name=namd
#SBATCH --output=job.out
#SBATCH --partition=short
#SBATCH -N 2 # two nodes, matching charmrun's +p24 and ++numHosts 2 below
#SBATCH --ntasks-per-node=12
###SBATCH --constraint=k20m
#SBATCH --exclude=comp[47-56]

module purge
module load prun/1.3
module load gnu7/7.3.0
module load openmpi3/3.1.0
module load ohpc

# build a charmrun nodelist from the nodes Slurm allocated to this job
scontrol show hostnames > hostlist
sed -i 's/^/host /' ./hostlist
sleep 3
cat ./hostlist

export OMP_NUM_THREADS=1
time /lustre/home/freeware/namd/NAMD_2.14_Linux-x86_64-verbs/charmrun +p24 ++nodelist ./hostlist ++numHosts 2 /lustre/home/freeware/namd/NAMD_2.14_Linux-x86_64-verbs/namd2 apoa1.namd
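
charmrun's ++nodelist option expects one "host <nodename>" line per allocated node, which is exactly what the scontrol and sed steps above produce; for a two-node job the hostlist file looks like this (node names are illustrative):

host comp01
host comp02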

Link to complete documentation: https://www.ks.uiuc.edu/Research/namd/2.14/ug/.

NWChem

NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.

NWChem is installed on the Žilina and Košice clusters.

Košice:

You can find the installed versions in the /lustre/home/freeware/nwchem folder. An example job execution script and a sample input file are in the /lustre/home/freeware/EXAMPLE_JOBS/nwchem/6.8 folder.

Žilina:

You can find the installed version in the /gpfs/home/freeware/LINUX/nwchem-6.5 folder. An example job execution script and a sample input file are in the /gpfs/home/freeware/LINUX/EXAMPLE_JOBS/nwchem folder.

Running

NWChem is compiled for parallel execution. Example run script for the Košice cluster:

#!/bin/bash
##SBATCH -A #account
#SBATCH -p short # partition
#SBATCH -J test # job name
#SBATCH -o job.%j.out # name of the stdout output file (%j expands to the job ID)
#SBATCH -N 1 # total number of nodes requested
#SBATCH -n 8 # total number of MPI tasks requested
#SBATCH -t 00:30:00 # run time (hh:mm:ss) - 30 minutes

module purge
module load gnu7/7.3.0 openmpi3/3.1.0 ohpc
module load openblas/0.2.20 fftw/3.3.8 python/3.8.12
export OMP_NUM_THREADS=1

$(which mpirun) -x LD_LIBRARY_PATH /lustre/home/freeware/nwchem/6.8/bin/LINUX64/nwchem h2o.nw

Example run script for the Žilina cluster:

#!/bin/bash
#@ job_type = parallel
#@ job_name = nwchem
#@ class = cluster_short
#@ error = job.err
#@ output = job.out
#@ node = 1
#@ tasks_per_node = 32
#@ queue

export project=n2
nwchem_dir=/gpfs/home/freeware/LINUX/nwchem-6.5/bin/LINUX64
export PATH=/gpfs/home/utils/LINUX/mpich/mpich-3.1.3/bin/:$PATH:$nwchem_dir

mpiexec -hostfile $LOADL_HOSTFILE -launcher rsh nwchem $project.nw
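
The nwchem command reads its input deck from $project.nw, so the script above expects a file named n2.nw in the working directory; to run a different input, change the project variable accordingly (the alternative name below is illustrative):

export project=n2      # nwchem reads n2.nw
# export project=mysim # would read mysim.nw instead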

Link to complete documentation: https://github.com/nwchemgit/nwchem/wiki.

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. More information is available at https://www.quantum-espresso.org.

Quantum ESPRESSO is installed on the Košice cluster. You can find the installed version in the /lustre/home/freeware/qe folder. An example job execution script and a sample input file are in the /lustre/home/freeware/EXAMPLE_JOBS/qe folder.

Running

Quantum ESPRESSO is compiled for parallel execution. Example run script for the Košice cluster:

#!/bin/bash
##SBATCH -A #account
#SBATCH -p short # partition
#SBATCH -J test # job name
#SBATCH -o job.%j.out # name of the stdout output file (%j expands to the job ID)
#SBATCH -N 1 # total number of nodes requested
#SBATCH -n 12 # total number of MPI tasks requested
#SBATCH -t 00:30:00 # run time (hh:mm:ss) - 30 minutes


module purge
module load prun/1.3
module load gnu7/7.3.0
module load openmpi3/3.1.0
module load ohpc

mpirun /lustre/home/freeware/qe/6.7/bin/pw.x -in $PWD/test_1.in &> $PWD/test_1.out
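
When the job finishes, the pw.x output is in test_1.out; since pw.x marks the final converged total energy with a leading "!", it can be extracted with grep (a usage sketch based on standard pw.x output):

grep '!' test_1.out   # prints the converged total energy line(s)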

Link to complete documentation: https://www.quantum-espresso.org/documentation/.