ARMCI DASSERT fail for CCSD property job


Just Got Here
Hi,

I've been trying to run some CCSD/aug-cc-pVTZ calculations of small-molecule dipoles and polarizabilities with the linear-response (LR) TCE code, and I keep getting ARMCI DASSERT errors when the CCSD step starts. I am running on a GNU/Linux Intel 64 cluster with InfiniBand; each node has 8 cores with 2 GB of memory per core. The error occurs even when I run on 320 cores. I've seen similar problems reported on the forum in the past without any resolution.

Here is the error message:
tce_ao2e: fast2e=1
half-transformed integrals in memory

2-e (intermediate) file size =     21950317536
2-e (intermediate) file name = ./acet.v2i
(rank:312 hostname:gpc-f146n023 pid:30333):ARMCI DASSERT fail. ../../ga-5-1/armci/src/devices/openib/openib.c:armci_pin_contig_hndl():1142 cond:(memhdl->memhndl!=((void *)0))


The input file is:

memory total 1800 mb noverify

geometry units angstrom
C 1.0726063 1.0777884 0.0000000
C 0.5728195 -0.3643775 0.0000000
O 1.3396375 -1.3132665 0.0000000
C -0.9422727 -0.5534472 0.0000000
H -1.3988705 -0.0695415 0.8935462
H -1.1841905 -1.6352983 0.0000000
H -1.3988705 -0.0695415 -0.8935462
H 0.6932690 1.6245888 0.8934669
H 0.6932690 1.6245888 -0.8934669
H 2.1811854 1.0916309 0.0000000
end


basis spherical
* library aug-cc-pvtz file /scinet/gpc/Applications/NWChem-6.0/data/libraries/
end

tce
 freeze atomic
ccsd
end

set tce:lineresp T
task tce energy

Note that I've tried different 2eorb options without any success. aug-cc-pVDZ jobs complete without any issues.

My compile options were:
module purge
module load intel/12.1.3
module load openmpi/1.4.4-intel-v12.1

export LARGE_FILES=TRUE
export NWCHEM_TOP=/home/c/crowley/crowley/programs/nwchem-6.1.1-src/
export NWCHEM_MODULES=all
export NWCHEM_TARGET=LINUX64
export CC=icc
export FC=ifort
export BLASOPT="-L/scinet/gpc/intel/ics/composer_xe_2011_sp1.9.293/mkl/lib/intel64 -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -lm"
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPI_LOC=/scinet/gpc/mpi/openmpi/1.4.4-intel-v12.1/
export MPI_LIB="$MPI_LOC/lib"
export MPI_INCLUDE="$MPI_LOC/include"
export LIBMPI="-lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -lrdmacm -libverbs -ltorque -ldl -Wl,--export-dynamic -lnsl -lutil"
export ARMCI_NETWORK=OPENIB
export MSG_COMMS=MPI
export TCGRSH=/usr/bin/ssh
export IB_HOME=/usr
export IB_LIB=$IB_HOME/lib64
export IB_INCLUDE=$IB_HOME/include

Any suggestions would be welcome.

Thanks,
Chris

Clicked A Few Times
LRCC
Hi Chris,
it is always safer to use an explicit memory specification.
Assuming you have 2 GB available per core, please use the following memory specification:

memory stack 1000 mb heap 50 mb global 800 mb noverify

I also presume that your molecule is closed-shell. In that case, please use the more efficient form of the 4-index transformation (which is valid only for RHF/ROHF references).
To enable it, add the following lines to your tce input group:

2eorb
2emet 13
attilesize 40

Make sure that tilesize is smaller than attilesize. By enabling this 4-index transformation you will save a lot of memory.
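
For example, the revised memory line and tce group could look like this (just a sketch; the tilesize value of 30 is only an illustration and not taken from your input):

memory stack 1000 mb heap 50 mb global 800 mb noverify

tce
 freeze atomic
 ccsd
 tilesize 30
 2eorb
 2emet 13
 attilesize 40
end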

Best,
Karol

Gets Around
Quote:Cnrowley Jul 14th 11:29 pm

(rank:312 hostname:gpc-f146n023 pid:30333):ARMCI DASSERT fail. ../../ga-5-1/armci/src/devices/openib/openib.c:armci_pin_contig_hndl():1142 cond:(memhdl->memhndl!=((void *)0))
Chris


I solve all my ARMCI segfault problems with http://wiki.mpich.org/armci-mpi/index.php/NWChem.
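
For reference, the NWChem side of that recipe amounts to rebuilding against an external ARMCI-MPI library instead of the native OpenIB port, roughly like this (the install path is a placeholder; the linked wiki page is the authoritative reference):

export ARMCI_NETWORK=ARMCI
export EXTERNAL_ARMCI_PATH=/path/to/armci-mpi/install   # placeholder: wherever ARMCI-MPI was installed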

Best,

Jeff

