
How to Use Gromacs to Simulate the Kinetics of Lysozyme with Cloud HPC

A hands-on tutorial on boosting the efficiency of lysozyme kinetics simulations in GROMACS with massive cloud high-performance computing resources.



Follow the steps below one by one on the Cloudam HPC platform to see how to efficiently run a kinetics simulation of the structural changes of lysozyme in the aqueous phase on a cloud high-performance computing platform.


Lysozyme in water


I. Structure Preparation


Lysozyme, also known as muramidase or N-acetylmuramide glycanohydrolase, is an alkaline enzyme that hydrolyzes mucopolysaccharides in bacterial cell walls. Lysozyme can also bind directly to negatively charged viral proteins and form complexes with DNA, RNA, and apoproteins to inactivate viruses. Here we download the 3D structure of the lysozyme protein from the PDB database (PDB ID 1AKI), or fetch it directly with PyMOL from the command line. We then use PyMOL to check for missing residues, remove all heteroatoms and water molecules so that only the protein remains, and save the result as a protein.pdb file.


fetch 1AKI
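The cleanup can also be done from the PyMOL command line. A minimal sketch (for 1AKI the heteroatom records are just crystallographic waters, so these two remove commands suffice):

remove solvent
remove hetatm
save protein.pdb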

II. Upload the input file to the Cloudam HPC platform


Open the Cloudam HPC platform, register, and log in to the console. Create a new folder named Lysozyme under /home/cloudam/jobs, then drag and drop the prepared protein.pdb file into this folder to upload it. Files can also be uploaded and downloaded easily with sftp.
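For command-line transfers, a minimal sftp session looks like the sketch below; the host address is a placeholder, so use the address and credentials shown in your Cloudam console:

sftp cloudam@<cluster-address>
sftp> cd /home/cloudam/jobs/Lysozyme
sftp> put protein.pdb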


In addition to the command line, Cloudam offers several submission methods, including a visual GUI template and a desktop workstation. The desktop workstation requires no coding and suits beginners with little IT expertise, while the graphical template is ideal for single-node and small-scale computing tasks. Try whichever submission method fits your needs (Cloudam offers a $30 Free Trial for every new user; you cannot miss it! Click here to get the $30 Free Trial.)


Submit jobs to the cluster via SLURM in the Cloudam terminal, suitable for IT professionals

Submit GROMACS jobs on Cloudam with the beginner-friendly GUI template

A desktop workstation on Cloudam acts like your own PC, but a more powerful one


III. Generate topology files


After making sure that the protein is free of heteroatoms and that the structure is intact, upload it to the HPC platform; we can then start the kinetics simulation with GROMACS. Before the simulation, select the version of GROMACS to use. The versions available on the Cloudam HPC platform are as follows:


GROMACS/2021-gromacs-cpu-new
GROMACS/2020-fosscuda-2019b
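If you want to list the installed versions yourself, the standard environment-modules command works; this assumes the platform uses Environment Modules or Lmod, as the module add syntax below suggests:

module avail GROMACS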

For the demonstration here we use the 2020 version of GROMACS, but feel free to choose another version for the simulation (if the version you require is unavailable, you can contact technical support to have it installed). The loading command is as follows:

module add GROMACS/2020-fosscuda-2019b

You can check whether the load succeeded by typing the gmx or gmx_mpi command. If output similar to the following is returned, the load was successful and you may proceed with the subsequent simulation:


[cloudam@master jobs]$ gmx_mpi

:-) GROMACS - gmx_mpi, 2021 (-:


GROMACS is written by:

Andrey Alekseenko Emile Apol Rossen Apostolov

Paul Bauer Herman J.C. Berendsen Par Bjelkmar

Christian Blau Viacheslav Bolnykh Kevin Boyd

Aldert van Buuren Rudi van Drunen Anton Feenstra

Gilles Gouaillardet Alan Gray Gerrit Groenhof

Anca Hamuraru Vincent Hindriksen M. Eric Irrgang

Aleksei Iupinov Christoph Junghans Joe Jordan

Dimitrios Karkoulis Peter Kasson Jiri Kraus

Carsten Kutzner Per Larsson Justin A. Lemkul

Viveca Lindahl Magnus Lundborg Erik Marklund

Pascal Merz Pieter Meulenhoff Teemu Murtola

Szilard Pall Sander Pronk Roland Schulz

Michael Shirts Alexey Shvetsov Alfons Sijbers

Peter Tieleman Jon Vincent Teemu Virolainen

Christian Wennberg Maarten Wolf Artem Zhmurov

and the project leaders:

Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

First, we use the pdb2gmx module from GROMACS, which generates three files: a molecular topology file named topol.top, a position restraint file named posre.itp, and a post-processed structure file named protein_processed.gro. The specific command is as follows:

gmx pdb2gmx -f protein.pdb -o protein_processed.gro -water tip3p -ignh -merge all

At this point you will see the following prompt for selecting a force field:


Select the Force Field:


From '/public/software/.local/easybuild/software/GROMACS/2021-gromacs-cpu-new/share/gromacs/top':

1: AMBER03 protein, nucleic AMBER94 (Duan et al., J. Comp. Chem. 24, 1999-2012, 2003)

2: AMBER94 forcefield (Cornell et al., JACS 117, 5179-5197, 1995)

3: AMBER96 protein, nucleic AMBER94 (Kollman et al., Acc. Chem. Res. 29, 461-469, 1996)

4: AMBER99 protein, nucleic AMBER94 (Wang et al., J. Comp. Chem. 21, 1049-1074, 2000)

5: AMBER99SB protein, nucleic AMBER94 (Hornak et al., Proteins 65, 712-725, 2006)

6: AMBER99SB-ILDN protein, nucleic AMBER94 (Lindorff-Larsen et al., Proteins 78, 1950-58, 2010)

7: AMBERGS forcefield (Garcia & Sanbonmatsu, PNAS 99, 2782-2787, 2002)

8: CHARMM27 all-atom forcefield (CHARM22 plus CMAP for proteins)

9: GROMOS96 43a1 forcefield

10: GROMOS96 43a2 forcefield (improved alkane dihedrals)

11: GROMOS96 45a3 forcefield (Schuler JCC 2001 22 1205)

12: GROMOS96 53a5 forcefield (JCC 2004 vol 25 pag 1656)

13: GROMOS96 53a6 forcefield (JCC 2004 vol 25 pag 1656)

14: GROMOS96 54a7 forcefield (Eur. Biophys. J. (2011), 40, 843-856, DOI: 10.1007/s00249-011-0700-9)

15: OPLS-AA/L all-atom forcefield (2001 aminoacid dihedrals)


Select the AMBER99 force field here to process the protein: type 4 and press Enter. Then check that the generated files include the following:


posre.itp  protein.pdb  protein_processed.gro  topol.top
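To run this step non-interactively (as the batch script in Section X does), pipe the force-field selection into the command:

echo 4 | gmx pdb2gmx -f protein.pdb -o protein_processed.gro -water tip3p -ignh -merge all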

IV. Define the box and fill it with solvent


Use editconf to define the box with the following command:


gmx editconf -f protein_processed.gro -o pro_newbox.gro -c -d 1.0 -bt cubic

The -c flag places the protein at the center of the box, -d 1.0 keeps the protein at least 1.0 nm from the edge of the box, and -bt cubic sets the box shape to cubic.


Then use solvate to fill the box with water molecules with the following command:


gmx solvate -cp pro_newbox.gro -cs spc216.gro -o pro_solv.gro -p topol.top

The -cp flag specifies the input configuration, and -cs specifies the solvent configuration. Here we use spc216.gro, a generic pre-equilibrated box of three-point water.


V. Add ions


After completing the above steps, we have a solvated system that carries a net charge, recorded in the topol.top file. We therefore need to add counter-ions to neutralize the system. Unlike Amber, GROMACS does not require you to work out the number of ions to add yourself, which makes it more user-friendly. Adding the ions takes two commands. The first step requires an ions.mdp parameter file; here we keep all the mdp parameter files in the MDP folder. The command is as follows:


gmx grompp -f ../MDP/ions.mdp -c pro_solv.gro -p topol.top -o ions.tpr

The above command produces a .tpr file, which provides an atomic-level description of our system.


For the second step, we replace some of the solvent water molecules with Na+ and Cl- ions to neutralize the system charge. The command is as follows:


gmx genion -s ions.tpr -o pro_solv_ions.gro -p topol.top -pname NA -nname CL -neutral

The group number for solvent SOL is 13, so we select 13 and press Enter, as follows:


Reading file ions.tpr, VERSION 2021 (single precision)
Will try to add 0 NA ions and 8 CL ions.
Select a continuous group of solvent molecules
Group     0 ( System) has 33892 elements
Group     1 ( Protein) has 1960 elements
Group     2 ( Protein-H) has 1001 elements
Group     3 ( C-alpha) has 129 elements
Group     4 ( Backbone) has 387 elements
Group     5 ( MainChain) has 517 elements
Group     6 ( MainChain+Cb) has 634 elements
Group     7 ( MainChain+H) has 646 elements
Group     8 ( SideChain) has 1314 elements
Group     9 ( SideChain-H) has 484 elements
Group    10 ( Prot-Masses) has 1960 elements
Group    11 ( non-Protein) has 31932 elements
Group    12 ( Water) has 31932 elements
Group    13 ( SOL) has 31932 elements
Group    14 ( non-Water) has 1960 elements
Select a group: 13

Contents of the ions.mdp parameter file:


; ions.mdp - used as input into grompp to generate ions.tpr

; Parameters describing what to do, when to stop and what to save

integrator = steep ; Algorithm (steep = steepest descent minimization)

emtol = 1000.0 ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm

emstep = 0.01 ; Minimization step size

nsteps = 50000 ; Maximum number of (minimization) steps to perform


; Parameters describing how to find the neighbors of each atom and how to calculate the interactions

nstlist = 1 ; Frequency to update the neighbor list and long range forces

cutoff-scheme = Verlet ; Buffered neighbor searching

ns_type = grid ; Method to determine neighbor list (simple, grid)

coulombtype = cutoff ; Treatment of long range electrostatic interactions

rcoulomb = 1.0 ; Short-range electrostatic cut-off

rvdw = 1.0 ; Short-range Van der Waals cut-off

pbc = xyz ; Periodic Boundary Conditions in all 3 dimensions


Protein and ion distribution in VMD


VI. Energy minimization of the system


The command is shown below. It also requires an mdp parameter file, named minim.mdp. The energy-minimization process is similar to the ion-addition process: grompp writes the topology and simulation parameters into a binary input file (.tpr), and then the mdrun module of the GROMACS MD engine minimizes the energy.

gmx grompp -f ../MDP/minim.mdp -c pro_solv_ions.gro -p topol.top -o em.tpr

Contents of the MDP parameter file:

; minim.mdp - used as input into grompp to generate em.tpr
; Parameters describing what to do, when to stop and what to save
integrator  = steep ; Algorithm (steep = steepest descent minimization)
emtol       = 1000.0        ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
emstep      = 0.01          ; Minimization step size
nsteps      = 50000         ; Maximum number of (minimization) steps to perform

; Parameters describing how to find the neighbors of each atom and how to calculate the interactions
nstlist         = 1         ; Frequency to update the neighbor list and long range forces
cutoff-scheme   = Verlet ; Buffered neighbor searching
ns_type         = grid ; Method to determine neighbor list (simple, grid)
coulombtype     = PME ; Treatment of long range electrostatic interactions
rcoulomb        = 1.0       ; Short-range electrostatic cut-off
rvdw            = 1.0       ; Short-range Van der Waals cut-off
pbc             = xyz ; Periodic Boundary Conditions in all 3 dimensions

Then type the following command to minimize the energy. This command should be submitted to a compute node for execution, not run directly on the management (login) node.

gmx mdrun -v -deffnm em
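From the login node, one minimal way to push this single step to a compute node is SLURM's --wrap option. A sketch, in which the core count is a placeholder and the partition is left to the default (use sinfo to list the partitions available to you); the output then lands in the slurm-<jobid>.out file rather than on screen:

sbatch -N 1 -c 8 --wrap "gmx mdrun -v -deffnm em"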

The minimization starts, and a progress report is printed to the screen:


Steepest Descents:

Tolerance (Fmax) = 1.00000e+03

Number of steps = 50000

Step= 0, Dmax= 1.0e-02nm, Epot= -2.94025e+05 Fmax= 1.53483e+05, atom= 19880

Step= 1, Dmax= 1.0e-02nm, Epot= -3.12614e+05 Fmax= 6.09036e+04, atom= 19880

Step= 2, Dmax= 1.2e-02nm, Epot= -3.35310e+05 Fmax= 2.40700e+04, atom= 19880

Step= 3, Dmax= 1.4e-02nm, Epot= -3.61996e+05 Fmax= 1.00755e+04, atom= 24188

Step= 4, Dmax= 1.7e-02nm, Epot= -3.87330e+05 Fmax= 4.66750e+03, atom= 983

Step= 5, Dmax= 2.1e-02nm, Epot= -4.08800e+05 Fmax= 1.30036e+04, atom= 547

Step= 6, Dmax= 2.5e-02nm, Epot= -4.13336e+05 Fmax= 2.07112e+04, atom= 547

Step= 7, Dmax= 3.0e-02nm, Epot= -4.17185e+05 Fmax= 1.90699e+04, atom= 547

Step= 8, Dmax= 3.6e-02nm, Epot= -4.18940e+05 Fmax= 2.96782e+04, atom= 547

Step= 9, Dmax= 4.3e-02nm, Epot= -4.21615e+05 Fmax= 2.83858e+04, atom= 547

Step= 11, Dmax= 2.6e-02nm, Epot= -4.26495e+05 Fmax= 6.89298e+03, atom= 547

Step= 12, Dmax= 3.1e-02nm, Epot= -4.27186e+05 Fmax= 3.75623e+04, atom= 1785

Step= 13, Dmax= 3.7e-02nm, Epot= -4.33871e+05 Fmax= 1.81256e+04, atom= 1785

Step= 15, Dmax= 2.2e-02nm, Epot= -4.35813e+05 Fmax= 1.54546e+04, atom= 1785

Step= 16, Dmax= 2.7e-02nm, Epot= -4.37056e+05 Fmax= 2.45185e+04, atom= 1785


Check whether the energy minimization converged using the following command (10 selects the Potential term and 0 ends the selection):

echo 10 0 | gmx energy -f em.edr -o potential.xvg

We use xmgrace to open the potential.xvg file, and the result is shown below:
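For example, from a graphical session such as the Cloudam desktop workstation:

xmgrace potential.xvg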

Curve of system potential energy with time


VII. NVT Equilibration


The NVT ensemble (constant particle Number, Volume, and Temperature) is also known as the isothermal-isochoric or canonical ensemble. The time required for NVT equilibration depends on the system's composition, but the temperature of the system should reach the target value and remain essentially constant. If the temperature is still unstable, more equilibration time is needed. Usually 50 ps to 100 ps is enough, so in this case we perform a 100 ps NVT equilibration. Depending on the machine, the process may take a while (about an hour on a dual-core MacBook, but a few minutes on Cloudam). The commands are as follows:


gmx grompp -f ../MDP/nvt.mdp -c em.gro -r em.gro -p topol.top -o nvt.tpr
gmx mdrun -deffnm nvt

The content of the MDP parameter file is as follows:


title = OPLS Lysozyme NVT equilibration

define = -DPOSRES ; position restrain the protein

; Run parameters

integrator = md ; leap-frog integrator

nsteps = 50000 ; 2 * 50000 = 100 ps

dt = 0.002 ; 2 fs

; Output control

nstxout = 500 ; save coordinates every 1.0 ps

nstvout = 500 ; save velocities every 1.0 ps

nstenergy = 500 ; save energies every 1.0 ps

nstlog = 500 ; update log file every 1.0 ps

; Bond parameters

continuation = no ; first dynamics run

constraint_algorithm = lincs ; holonomic constraints

constraints = h-bonds ; bonds involving H are constrained

lincs_iter = 1 ; accuracy of LINCS

lincs_order = 4 ; also related to accuracy

; Nonbonded settings

cutoff-scheme = Verlet ; Buffered neighbor searching

ns_type = grid ; search neighboring grid cells

nstlist = 10 ; 20 fs, largely irrelevant with Verlet

rcoulomb = 1.0 ; short-range electrostatic cutoff (in nm)

rvdw = 1.0 ; short-range van der Waals cutoff (in nm)

DispCorr = EnerPres ; account for cut-off vdW scheme

; Electrostatics

coulombtype = PME ; Particle Mesh Ewald for long-range electrostatics

pme_order = 4 ; cubic interpolation

fourierspacing = 0.16 ; grid spacing for FFT

; Temperature coupling is on

tcoupl = V-rescale ; modified Berendsen thermostat

tc-grps = Protein Non-Protein ; two coupling groups - more accurate

tau_t = 0.1 0.1 ; time constant, in ps

ref_t = 300 300 ; reference temperature, one for each group, in K

; Pressure coupling is off

pcoupl = no ; no pressure coupling in NVT

; Periodic boundary conditions

pbc = xyz ; 3-D PBC

; Velocity generation

gen_vel = yes ; assign velocities from Maxwell distribution

gen_temp = 300 ; temperature for Maxwell distribution

gen_seed = -1 ; generate a random seed

We can again use gmx energy to check whether the equilibration is complete (16 selects the Temperature term). The command is as follows:


echo 16 0 | gmx energy -f nvt.edr -o temperature.xvg

The result is shown below. The temperature stays close to 300 K throughout. If it fluctuates too much, increase nsteps to equilibrate the system further.

System temperature vs. time curve


VIII. NPT Equilibration


The NVT equilibration in the previous step stabilized the system's temperature. Before collecting data, we also need to stabilize the system's pressure (and therefore its density). Pressure equilibration is performed in the NPT ensemble, in which the particle Number, Pressure, and Temperature are kept constant. This ensemble is also known as the isothermal-isobaric ensemble and most closely resembles experimental conditions. The commands for a 100 ps NPT equilibration are as follows:


gmx grompp -f ../MDP/npt.mdp -c nvt.gro -r nvt.gro -t nvt.cpt -p topol.top -o npt.tpr

gmx mdrun -deffnm npt

The content of the MDP parameter file:

title = OPLS Lysozyme NPT equilibration

define = -DPOSRES ; position restrain the protein

; Run parameters

integrator = md ; leap-frog integrator

nsteps = 50000 ; 2 * 50000 = 100 ps

dt = 0.002 ; 2 fs

; Output control

nstxout = 500 ; save coordinates every 1.0 ps

nstvout = 500 ; save velocities every 1.0 ps

nstenergy = 500 ; save energies every 1.0 ps

nstlog = 500 ; update log file every 1.0 ps

; Bond parameters

continuation = yes ; Restarting after NVT

constraint_algorithm = lincs ; holonomic constraints

constraints = h-bonds ; bonds involving H are constrained

lincs_iter = 1 ; accuracy of LINCS

lincs_order = 4 ; also related to accuracy

; Nonbonded settings

cutoff-scheme = Verlet ; Buffered neighbor searching

ns_type = grid ; search neighboring grid cells

nstlist = 10 ; 20 fs, largely irrelevant with Verlet scheme

rcoulomb = 1.0 ; short-range electrostatic cutoff (in nm)

rvdw = 1.0 ; short-range van der Waals cutoff (in nm)

DispCorr = EnerPres ; account for cut-off vdW scheme

; Electrostatics

coulombtype = PME ; Particle Mesh Ewald for long-range electrostatics

pme_order = 4 ; cubic interpolation

fourierspacing = 0.16 ; grid spacing for FFT

; Temperature coupling is on

tcoupl = V-rescale ; modified Berendsen thermostat

tc-grps = Protein Non-Protein ; two coupling groups - more accurate

tau_t = 0.1 0.1 ; time constant, in ps

ref_t = 300 300 ; reference temperature, one for each group, in K

; Pressure coupling is on

pcoupl = Parrinello-Rahman ; Pressure coupling on in NPT

pcoupltype = isotropic ; uniform scaling of box vectors

tau_p = 2.0 ; time constant, in ps

ref_p = 1.0 ; reference pressure, in bar

compressibility = 4.5e-5 ; isothermal compressibility of water, bar^-1

refcoord_scaling = com

; Periodic boundary conditions

pbc = xyz ; 3-D PBC

; Velocity generation

gen_vel = no ; Velocity generation is off


Command to check the equilibration result (18 selects the Pressure term):

echo 18 0 | gmx energy -f npt.edr -o pressure.xvg

System pressure vs. time curve
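Pressure fluctuates strongly over short timescales, so it is also common to confirm that the density has stabilized (around 1000 kg/m^3 for this aqueous system). A sketch; the menu number for the Density term (24 here) varies with the system and GROMACS version, so check the printed menu first:

echo 24 0 | gmx energy -f npt.edr -o density.xvg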


IX. Production MD simulation


With the two equilibration phases complete, the system is now well equilibrated at the desired temperature and pressure. We can release the position restraints and run the production MD to collect data, a process similar to the previous ones. When running grompp, we again pass the checkpoint file (which in this case contains the pressure-coupling state). We will perform a 20 ns MD simulation with the following commands:


gmx grompp -f ../MDP/md.mdp -c npt.gro -t npt.cpt -p topol.top -o md.tpr

gmx mdrun -v -deffnm md
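On a GPU node such as the V100 used below, GROMACS 2020 and later can also be told explicitly to offload the nonbonded, PME, and (for suitable systems) update work to the GPU, rather than relying on the environment variables set in the Section X script. A sketch:

gmx mdrun -v -deffnm md -nb gpu -pme gpu -update gpu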

The content of the MDP parameter file:

title = OPLS Lysozyme NPT equilibration

; Run parameters

integrator = md ; leap-frog integrator

nsteps = 10000000 ; 2 * 10000000 = 20000 ps (20 ns)

dt = 0.002 ; 2 fs

; Output control

nstxout = 0 ; suppress bulky .trr file by specifying

nstvout = 0 ; 0 for output frequency of nstxout,

nstfout = 0 ; nstvout, and nstfout

nstenergy = 5000 ; save energies every 10.0 ps

nstlog = 5000 ; update log file every 10.0 ps

nstxout-compressed = 5000 ; save compressed coordinates every 10.0 ps

compressed-x-grps = System ; save the whole system

; Bond parameters

continuation = yes ; Restarting after NPT

constraint_algorithm = lincs ; holonomic constraints

constraints = h-bonds ; bonds involving H are constrained

lincs_iter = 1 ; accuracy of LINCS

lincs_order = 4 ; also related to accuracy

; Neighborsearching

cutoff-scheme = Verlet ; Buffered neighbor searching

ns_type = grid ; search neighboring grid cells

nstlist = 10 ; 20 fs, largely irrelevant with Verlet scheme

rcoulomb = 1.0 ; short-range electrostatic cutoff (in nm)

rvdw = 1.0 ; short-range van der Waals cutoff (in nm)

; Electrostatics

coulombtype = PME ; Particle Mesh Ewald for long-range electrostatics

pme_order = 4 ; cubic interpolation

fourierspacing = 0.16 ; grid spacing for FFT

; Temperature coupling is on

tcoupl = V-rescale ; modified Berendsen thermostat

tc-grps = Protein Non-Protein ; two coupling groups - more accurate

tau_t = 0.1 0.1 ; time constant, in ps

ref_t = 300 300 ; reference temperature, one for each group, in K

; Pressure coupling is on

pcoupl = Parrinello-Rahman ; Pressure coupling on in NPT

pcoupltype = isotropic ; uniform scaling of box vectors

tau_p = 2.0 ; time constant, in ps

ref_p = 1.0 ; reference pressure, in bar

compressibility = 4.5e-5 ; isothermal compressibility of water, bar^-1

; Periodic boundary conditions

pbc = xyz ; 3-D PBC

; Dispersion correction

DispCorr = EnerPres ; account for cut-off vdW scheme

; Velocity generation

gen_vel = no ; Velocity generation is off


The production simulation takes a long time on ordinary hardware. However, many GPU models are available on the Cloudam cloud-HPC platform; for this tutorial we selected a V100 card, which completed the 20 ns calculation for the above system in only a few hours.

Various GPUs are available on Cloudam


After the kinetics simulation finished, the platform reported that the 20 ns simulation took 3 hours and 20 minutes, an impressive throughput of 143.41 ns per day.


starting mdrun 'HYDROLASE in water'

10000000 steps, 20000.0 ps.

step 80: timed with pme grid 44 44 44, coulomb cutoff 1.000: 140.1 M-cycles

step 160: timed with pme grid 40 40 40, coulomb cutoff 1.091: 135.9 M-cycles

step 240: timed with pme grid 36 36 36, coulomb cutoff 1.212: 147.5 M-cycles

step 320: timed with pme grid 32 32 32, coulomb cutoff 1.363: 175.3 M-cycles

step 400: timed with pme grid 36 36 36, coulomb cutoff 1.212: 145.9 M-cycles

step 480: timed with pme grid 40 40 40, coulomb cutoff 1.091: 138.7 M-cycles

step 560: timed with pme grid 42 42 42, coulomb cutoff 1.039: 141.3 M-cycles

step 640: timed with pme grid 44 44 44, coulomb cutoff 1.000: 143.6 M-cycles

optimal pme grid 40 40 40, coulomb cutoff 1.091

step 9999900, remaining wall clock time: 0 s

Writing final coordinates.

step 10000000, remaining wall clock time: 0 s


NOTE: The GPU has >25% less load than the CPU. This imbalance causes performance loss.


               Core t (s)   Wall t (s)        (%)
       Time:   322366.259    12049.380     2675.4
                         3h20:49
                 (ns/day)    (hour/ns)
Performance:      143.410        0.167

gcq#450: "Above all, don't fear difficult moments. The best comes from them." (Rita Levi-Montalcini)


The finished kinetics simulation can be animated with the MD Movie tool in Chimera or other visualization software. Some simulation results are shown below; the green atoms are the added ions.
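Before visualization, it is common (though optional) to remove periodic-boundary jumps and center the protein in the box with trjconv, selecting group 1 (Protein) for centering and group 0 (System) for output; the resulting md_noPBC.xtc can then be loaded in place of md.xtc:

echo 1 0 | gmx trjconv -s md.tpr -f md.xtc -o md_noPBC.xtc -pbc mol -center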



A kinetics simulation animation with Chimera


We use the gmx rms tool to calculate the RMSD of the protein backbone over time, selecting group 4 (Backbone) for both the least-squares fit and the calculation. The command is as follows:


echo 4 4 | gmx rms -f md.xtc -s md.tpr -o rmsd.xvg

View results with xmgrace

xmgrace -nxy rmsd.xvg

Kinetic analysis of the RMSD variation curve with simulation time


X. Job submission on the Cloudam HPC platform


The tutorial above shows in detail how to perform kinetics simulations with GROMACS. It is worth mentioning that the commands above can be run interactively on a single machine, or submitted all together as a job script. Command to submit the script:


sbatch -N 1 -p g-v100-1 -c 12 md-gromacs.sh

where -N is the number of nodes (here 1), -p is the selected partition (here the V100 GPU partition, g-v100-1), and -c is the number of CPU cores per task.
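After submission, you can track the job with the standard SLURM queue command and follow GROMACS's progress in the log file that mdrun writes:

squeue -u $USER

tail -f md.log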


The md-gromacs.sh script covers all the commands in the tutorial above. Following the Cloudam HPC platform guidelines, load the GROMACS module at the beginning of the script; if you applied for a GPU, also set the GPU-related environment variables (lines 1-6). The contents of the script are as follows:


module add GROMACS/2020-fosscuda-2019b

export GMX_GPU_DD_COMMS=true

export GMX_GPU_PME_PP_COMMS=true

export GMX_FORCE_UPDATE_DEFAULT_GPU=true

ntomp="$SLURM_CPUS_PER_TASK"

export OMP_NUM_THREADS=$ntomp


echo 4 | gmx pdb2gmx -f protein.pdb -o protein_processed.gro -water tip3p -ignh -merge all


gmx editconf -f protein_processed.gro -o pro_newbox.gro -c -d 1.0 -bt cubic


gmx solvate -cp pro_newbox.gro -cs spc216.gro -o pro_solv.gro -p topol.top


gmx grompp -f ../MDP/ions.mdp -c pro_solv.gro -p topol.top -o ions.tpr


echo 13 | gmx genion -s ions.tpr -o pro_solv_ions.gro -p topol.top -pname NA -nname CL -neutral


gmx grompp -f ../MDP/minim.mdp -c pro_solv_ions.gro -p topol.top -o em.tpr


gmx mdrun -v -deffnm em


echo 10 0 | gmx energy -f em.edr -o potential.xvg


gmx grompp -f ../MDP/nvt.mdp -c em.gro -r em.gro -p topol.top -o nvt.tpr


gmx mdrun -deffnm nvt



echo 16 0 | gmx energy -f nvt.edr -o temperature.xvg


gmx grompp -f ../MDP/npt.mdp -c nvt.gro -r nvt.gro -t nvt.cpt -p topol.top -o npt.tpr


gmx mdrun -deffnm npt


echo 18 0 | gmx energy -f npt.edr -o pressure.xvg


gmx grompp -f ../MDP/md.mdp -c npt.gro -t npt.cpt -p topol.top -o md.tpr


gmx mdrun -v -deffnm md


echo 4 4 | gmx rms -f md.xtc -s md.tpr -o rmsd.xvg


That's all for this tutorial. All operations can be done online by logging in to the Cloudam HPC platform, so you don't need to own a high-performance computer, and the platform spares you the time of tedious tool installation.


About Cloudam


Cloudam HPC is a one-stop HPC platform with 300+ applications pre-installed and ready to deploy immediately. The system smartly schedules compute nodes and dynamically schedules software licenses, optimizing workflows and boosting efficiency for engineers and researchers in Life Sciences, AI/ML, CAE/CFD simulation, universities/colleges, etc.


Partnered with AWS, Azure, Google Cloud, Oracle Cloud, etc., Cloudam powers your R&D with massive cloud resources without queuing.


You can submit jobs via intuitive templates, SLURM, or Windows/Linux workstations. Whether you are a beginner or a professional, you will find it handy to run and manage your jobs.


There is a $30 Free Trial for every new user. Why not register and boost your R&D NOW?




