Hi,
Just a note about compiling NWChem with the Intel compilers on ARCHER, the UK national supercomputing service, in case anyone else tries to do the same. As a first attempt I compiled a basic MPI-TS build with the following settings:
export NWCHEM_TOP=...
export NWCHEM_TARGET=LINUX64
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export LIBMPI=" "
export ARMCI_NETWORK=MPI-TS
export USE_64TO32=y
make FC=ftn nwchem_config NWCHEM_MODULES=all &> make_config.log
make 64_to_32 &> make_64to32.log
make FC=ftn &> make.log
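(Note that ftn is just the Cray compiler wrapper and picks up whichever programming environment is currently loaded, so the Intel environment has to be selected before building. On the XC30 this is normally something along the lines of the command below, though the default environment on your account may differ:
module swap PrgEnv-cray PrgEnv-intel
)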
This seemed to compile OK, but in certain circumstances the resulting binary gives incorrect results, in particular for the force (as written to force.dat) on the point charge in the following input:
title "nwchem"
memory 100 mb
print medium
start nwchem
geometry print units bohr nocenter noautoz
symmetry c1 tol 0.0
zn 0.00000000000000 0.00000000000000 0.00000000000000
end
bq units au
force force.dat
0.00000000000000 3.77945332702064 0.00000000000000 2.0000000000
end
basis print
zn library 3-21g
end
charge 0
set scf:converged false
task scf gradient
I traced the error back to the file src/property/hnd_elfcon.F, where the standard Intel compiler optimisations cause the loop in the subroutine multi_reduce to give an incorrect result. I tried switching the i and j loops, but the errors persisted (albeit with different numbers). However, the errors can be avoided by moving hnd_elfcon.o from OBJ_OPTIMIZE to OBJ in src/property/GNUmakefile, so that the file is compiled at a lower optimisation level.
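For reference, the workaround is a one-line move in src/property/GNUmakefile; the object lists are abbreviated with "..." below because their exact contents depend on the NWChem version:
OBJ_OPTIMIZE = ...   (remove hnd_elfcon.o from this list)
OBJ = ... hnd_elfcon.o   (and add it here)
After editing the makefile I deleted the stale object so that make actually recompiles it, and then reran the top-level build, i.e. something like:
rm -f $NWCHEM_TOP/src/property/hnd_elfcon.o
cd $NWCHEM_TOP/src
make FC=ftn &> remake.log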
I am not sure whether this is an XC30-specific issue or simply a compiler problem. The Intel build on my local machine is not affected, but that uses a different compiler version (13.0.1 locally vs 14.0.1 on ARCHER).
NB: the standard NWChem module on ARCHER is compiled with GNU compilers and so is not affected by this problem.
Tom