2 Install NAMD 2.13 on Cray XC40



2.1 Run NAMD on Cray XC40

Your batch job will need to load modules and set environment variables:

module swap PrgEnv-cray PrgEnv-gnu

module load rca

module load craype-hugepages8M

setenv HUGETLB_DEFAULT_PAGE_SIZE 8M

setenv HUGETLB_MORECORE no
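
For example, a minimal batch script tying these pieces together might look like the following. This is only a sketch: it assumes a PBS-style scheduler and a csh/tcsh shell to match the setenv syntax above (under bash, use export HUGETLB_DEFAULT_PAGE_SIZE=8M and export HUGETLB_MORECORE=no); the resource-request syntax, binary path, and config file name are placeholders to adapt to your site.

#!/bin/csh
#PBS -l nodes=16                 # request 16 compute nodes (site-specific syntax; an assumption)
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR

module swap PrgEnv-cray PrgEnv-gnu
module load rca
module load craype-hugepages8M
setenv HUGETLB_DEFAULT_PAGE_SIZE 8M
setenv HUGETLB_MORECORE no

# one SMP process per 32-core node, as in the first aprun example below
aprun -n 16 -r 1 -N 1 -d 31 /path/to/namd2 +ppn 30 +pemap 1-30 +commap 0 myconfig.namd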

To run an SMP build with one process per node on 16 32-core nodes:

aprun -n 16 -r 1 -N 1 -d 31 /path/to/namd2 +ppn 30 +pemap 1-30 +commap 0 <configfile>

or the same with 4 processes per node:

aprun -n 64 -N 4 -d 8 /path/to/namd2 +ppn 7 +pemap 1-7,9-15,17-23,25-31 +commap 0,8,16,24 <configfile>

or non-SMP, leaving one core free for the operating system:

aprun -n 496 -r 1 -N 31 -d 1 /path/to/namd2 +pemap 0-30 <configfile>

The explicit +pemap and +commap settings are necessary to avoid having multiple threads assigned to the same core (or, potentially, all threads assigned to the same core). In the four-process example above, +pemap 1-7,9-15,17-23,25-31 pins the 28 worker threads to distinct cores, while +commap 0,8,16,24 reserves one core per process for its communication thread. If the performance of NAMD on a single compute node is much worse than on a comparable non-Cray host, it is very likely that your CPU affinity settings need to be fixed.

2.2 Compile NAMD

We tried every build option under NAMD_2.13_Source/charm-6.8.2/src/arch/gni-crayxc; only the gnu option works. The build process is as follows.

  • tar xzf NAMD_2.13_Source.tar.gz
  • cd NAMD_2.13_Source
  • tar xf charm-6.8.2.tar
  • Build and test the Charm++/Converse library (MPI version):
    • cd charm-6.8.2
    • module swap PrgEnv-cray PrgEnv-gnu
    • module load rca
    • module load craype-hugepages8M
    • ./build charm++ mpi-crayxc smp --with-production
    • cd mpi-crayxc-smp/tests/charm++/megatest
    • make pgm
    • mpiexec -n 4 ./pgm (run as any other MPI program on your cluster)
    • cd ../../../../..
  • Download and install the TCL and FFTW libraries (cd to NAMD_2.13_Source if you’re not already there); typical commands are sketched after this list.
  • Set up build directory and compile:
    • MPI version: ./config Linux-x86_64-g++ --charm-arch mpi-crayxc-smp
    • cd Linux-x86_64-g++
    • gmake -j4 (the -j4 flag runs up to four compile jobs in parallel, so this should finish faster than a plain make)
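
The TCL and FFTW step above typically uses the prebuilt tarballs from the NAMD website. The commands below follow the NAMD 2.13 release notes; treat the exact URLs and version numbers as assumptions to verify against the notes shipped with your source tree. Run them from NAMD_2.13_Source:

wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz
tar xzf fftw-linux-x86_64.tar.gz
mv linux-x86_64 fftw
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64.tar.gz
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64-threaded.tar.gz
tar xzf tcl8.5.9-linux-x86_64.tar.gz
tar xzf tcl8.5.9-linux-x86_64-threaded.tar.gz
mv tcl8.5.9-linux-x86_64 tcl
mv tcl8.5.9-linux-x86_64-threaded tcl-threaded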
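
After gmake finishes, a quick smoke test of the new binary is worthwhile before submitting production jobs. The small alanin test input ships with the NAMD source (per the release notes); since this is an MPI build, launch it as you would any MPI program on your system, which on the XC40 typically means aprun (the two-process count here is just an example):

cd Linux-x86_64-g++
aprun -n 2 ./namd2 src/alanin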