Compiling NWChem with VampirTrace on Titan


Clicked A Few Times
Hi,

I am trying to get NWChem compiled with VampirTrace (VT) on Titan. Below are the steps that I followed (1) without VT and (2) with VT. Since I run into configure errors in case (2), I am wondering whether anyone has compiled NWChem with VT before.


(1) nwchem without VT:
======
env variables set:
export NWCHEM_TOP=/lustre/atlas/scratch/jagode/csc103/nwchem-6.3-VT
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES=all

export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y

$ module swap PrgEnv-pgi PrgEnv-intel
$ make FC=ftn

===> compiles and runs to completion.


(2) nwchem with VT:
======
In addition to the env vars listed above, I also set the following:
export VT_FC=ftn
export VT_CC=cc
export VT_VERBOSE=2

$ module swap PrgEnv-pgi PrgEnv-intel
$ module load vampirtrace/5.14.4-nogpu
$ make FC="vtfort -cpp -DVTRACE"

===> error during the configure step (before compilation proper even starts):
...
checking size of Fortran INTEGER... no
checking size of Fortran REAL... no
checking size of Fortran DOUBLE PRECISION... no
checking for dtime... yes
checking for etime... yes
checking for vtfort -cpp -DVTRACE flush routine... flush
checking for flag to disable vtfort -cpp -DVTRACE main when linking with C main... -nofor-main
checking for routines to access the command line from Fortran... yes
checking whether Fortran hidden string length convention is after args...
configure: error: f2c string convention is neither after args nor after string
make[1]: *** [build/config.status] Error 1


Is there anything I missed in order to get NWChem with VT compiled on Titan?

Thanks much for any advice,
Heike

Forum Regular
Hi Heike,

There are a few things to consider here:

1. VampirTrace is designed to work with MPI. There is also an interface that can be used to instrument code with VampirTrace directly (I built that into the Global Arrays years ago), but that interface has changed dramatically in recent years, so the current instrumentation in the Global Arrays no longer works. You therefore need to build a pure-MPI version of the Global Arrays, i.e. set ARMCI_NETWORK to MPI-TS.

2. NWChem uses the name of the compiler to choose a number of compilation flags. So if you do something like make FC='compiler -a-bunch-of-flags', the Makefiles get hopelessly confused.
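The two points above can be sketched in shell. Everything beyond ARMCI_NETWORK=MPI-TS is my own assumption, not from this thread: in particular the wrapper-script workaround for point 2, the vtwrap directory name, and the trick of stripping the wrapper from PATH so that vtfort still finds the real ftn underneath.

```shell
# Point 1: pure-MPI port of the Global Arrays.
export ARMCI_NETWORK=MPI-TS

# Point 2 (hypothetical workaround): hide the instrumentation flags
# behind a single-word compiler name so the Makefiles can still parse FC.
mkdir -p "$HOME/vtwrap"
cat > "$HOME/vtwrap/ftn" <<'EOF'
#!/bin/sh
# Drop the wrapper directory from PATH so vtfort resolves the real ftn.
PATH=$(echo "$PATH" | sed "s|$HOME/vtwrap:||")
exec vtfort -cpp -DVTRACE "$@"
EOF
chmod +x "$HOME/vtwrap/ftn"
export PATH="$HOME/vtwrap:$PATH"

# Then build with the familiar single-word compiler name:
# make FC=ftn
```

Whether NWChem's Makefiles accept such a wrapper is untested here; point 1 is likely needed regardless of how the FC problem is solved.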

The way VampirTrace worked years ago was that you compiled your code normally, and at the link stage you inserted the VampirTrace library before -lpmpi. That way VampirTrace provided the normal MPI interface and called the internal PMPI routines underneath (this profiling interface is the mechanism the MPI standard specifies for supporting performance-monitoring tools). So, assuming this still works, you should be able to compile NWChem as usual, then edit the link line and relink the code. That should give you a VampirTrace-instrumented binary which can measure performance at the MPI level.
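The edit-and-relink step can be illustrated mechanically. The link line below is a made-up stand-in, and -lvt-mpi is my assumption for the name of the VampirTrace 5.x MPI wrapper library; the real link command should be copied from the verbose make output.

```shell
# A stand-in for the real link command captured from the make output:
echo 'ftn -o nwchem nwchem.o -lnwctask -lpmpi' > link.cmd

# Insert the VampirTrace library just before -lpmpi, as described above:
sed -i 's/-lpmpi/-lvt-mpi -lpmpi/' link.cmd

cat link.cmd   # inspect the edited line, then relink with: sh link.cmd
```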

Huub

