Edoapra,
Sorry for the late reply. After several attempts to compile NWChem with MPICH, the admin of the Linux cluster I am using was able to build an executable (in his own directory under /home/...) using the following script:
#!/bin/bash
module purge
module load /share/apps/modulefiles/gcc48 mvapich2-2.2b_intel2013 python2.7
export MKLROOT=/share/apps/intel/composer_xe_2013_sp1.3.174/mkl
export NWCHEM_TOP=/home/ittipat/installer/nwchem-6.6
export NWCHEM_TARGET=LINUX64
export ARMCI_NETWORK=OPENIB
export CC=icc
export FC=ifort
#export USE_ARUR=TRUE
export USE_NOFSCHECK=TRUE
export NWCHEM_FSCHECK=N
export LARGE_FILES=TRUE
export MRCC_THEORY=Y
export EACCSD=Y
export IPCCSD=Y
export CCSDTQ=Y
export CCSDTLR=Y
export NWCHEM_LONG_PATHS=Y
export PYTHONHOME=/usr
#export PYTHON_LIB=
export PYTHONVERSION=2.6
export USE_PYTHONCONFIG=1
#export PYTHONLIBTYPE=so
#export USE_PYTHON64=y
export HAS_BLAS=yes
export BLAS_LOC=${MKLROOT}/lib/intel64
export BLASOPT="-lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl"
export BLAS_SIZE='4'
export MAKE=/usr/bin/make
export LD_LIBRARY_PATH="/share/apps/mpi/mvapich2-2.2b_intel2013/lib:/share/apps/python/lib/:/export/apps/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/"
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPIEXEC=/share/apps/mpi/mvapich2-2.2b_intel2013/bin/mpiexec
export MPI_LIB=/share/apps/mpi/mvapich2-2.2b_intel2013/lib
export MPI_INCLUDE=/share/apps/mpi/mvapich2-2.2b_intel2013/include
#export LIBMPI="-lmpi -L$MKLROOT/lib/intel64 -lmpifort -rpath --enable-new-dtags"
export LDFLAGS="-L/export/apps/compilers/intel2013/composer_xe_2013_sp1.3.174/compiler/lib/intel64/"
#export LIBMPI="-lmpifort -Wl -rpath --enable-new-dtags -lmpi"
#make nwchem_config NWCHEM_MODULES="all python"
#make -j4 64_to_32
#make -j4
#make nwchem_config NWCHEM_MODULES="all python" 2>&1 | tee ../make_nwchem_config_mpich.log
#make 64_to_32 2>&1 | tee ../make_64_to_32_mpich.log
#make 2>&1 | tee ../makefile.log
$MAKE realclean
$MAKE nwchem_config NWCHEM_MODULES="all python" 2>&1 | tee ../make_nwchem_config_mpich.log
$MAKE 64_to_32 2>&1 | tee ../make_64_to_32_mpich.log
export MAKEOPTS="USE_64TO32=y"
$MAKE ${MAKEOPTS} 2>&1 | tee ../makefile.log
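If I understand the standard NWChem build layout correctly, the finished executable should end up under $NWCHEM_TOP/bin/$NWCHEM_TARGET, so something like this (the exact path is only my assumption from the variables in the script above):
$ ls /home/ittipat/installer/nwchem-6.6/bin/LINUX64/nwchem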
The admin then copied the executable binary to a shared directory. After that, I tried to run NWChem on a simple test calculation in serial, using the following commands:
$ module purge
$ module load mvapich2-2.2b_intel2013
$ nwchem optimize-water-molecule.nw
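For reference, the input deck is just a minimal water geometry optimization along these lines (the title and file prefix match the job output below; the coordinates shown here are only illustrative and my actual values may differ slightly):
$ cat optimize-water-molecule.nw
start h2o
title "Water in 6-31g basis set"
geometry units angstroms
  O   0.000   0.000   0.000
  H   0.757   0.586   0.000
  H  -0.757   0.586   0.000
end
basis
  * library 6-31g
end
task scf optimize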
Unfortunately, the run failed immediately with an error. Please allow me to post the output here, since the error seems to be related to how the program was compiled.
Job information
---------------
hostname = castor.narit.or.th
program = nwchem
date = Mon Sep 11 15:38:49 2017
compiled = Mon_Sep_11_13:59:20_2017
source = /share/apps/nwchem-6.6
nwchem branch = 6.6
nwchem revision = 27746
ga revision = 10594
input = optimize-water-molecule.nw
prefix = h2o.
data base = ./h2o.db
status = startup
nproc = 1
time left = -1s
Memory information
------------------
heap = 32767994 doubles = 250.0 Mbytes
stack = 32767999 doubles = 250.0 Mbytes
global = 65536000 doubles = 500.0 Mbytes (distinct from heap & stack)
total = 131071993 doubles = 1000.0 Mbytes
verify = yes
hardfail = no
Directory information
---------------------
0 permanent = .
0 scratch = .
NWChem Input Module
-------------------
Water in 6-31g basis set
------------------------
0:Segmentation Violation error, status=: 11
(rank:0 hostname:castor.narit.or.th pid:887):ARMCI DASSERT fail. ../../ga-5-4/armci/src/common/signaltrap.c:SigSegvHandler():315 cond:0
Last System Error Message from Task 0:: Bad address
[unset]: aborting job:
application called MPI_Abort(comm=0x84000001, 11) - process 0
I am wondering: could copying the binary to a different location cause this kind of segmentation violation? Do you have any suggestions for overcoming this error?
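If it helps with the diagnosis, one thing I could check is whether the copied binary still resolves all of its shared libraries on the compute node, for example (the binary location here is my guess based on the job header above):
$ module load mvapich2-2.2b_intel2013
$ ldd /share/apps/nwchem-6.6/bin/LINUX64/nwchem | grep -i 'not found'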
Thanks in advance.
Cheers,
Rangsiman