Hello,
I am trying to compute a CH4 + H reaction using DFT with the XYG3 functional, and I want to include the counterpoise correction in my calculation. Unfortunately, I am having trouble running my input with the BSSE directive.
I installed NWChem 6.5 on my personal server, a 64-bit Linux Mint (Ubuntu-based) machine. All the details of my installation can be found via this link in the forum.
Here is my input file:
start nwpath1
geometry geom1 units angstrom
LOAD format xyz xyz1.xyz
END
BASIS
* library aug-cc-pvQZ
bqC library C aug-cc-pvQZ
bqH library H aug-cc-pvQZ
END
set geometry geom1
SCF
doublet
END
DFT
odft
tolerances tight
grid xfine
xc hfexch 0.8033 slater 0.1967 becke88 nonlocal 0.2107 lyp 0.6789 mp2 0.3211
dftmp2 semidirect
direct
mult 2
iterations 100
END
BSSE
mon first 1 2 3 4 5
mon second 6 mult 2
END
task dft energy
#python
#exponent=1
#energy = task_energy('dft')
#nwplotdata = open("nwplotdata",'w')
#nwplotdata.write('%f %f\n' % (exponent , energy))
#end
#task python
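To make sure I am asking the BSSE module for the right thing, this is the Boys-Bernardi counterpoise scheme I expect it to perform for my two fragments (CH4 = atoms 1-5, H = atom 6). The Python below is only my own sketch with illustrative names and made-up energies, not NWChem code:

```python
# Sketch of the counterpoise (CP) bookkeeping I expect from the BSSE block.
# "ghost" means the fragment is computed in the full dimer basis, with the
# partner's atoms replaced by bq ghost centers (as in my bqC/bqH basis lines).

def counterpoise_correction(e_a_mono, e_b_mono, e_a_ghost, e_b_ghost):
    """BSSE estimate: how much each monomer is artificially lowered
    when it borrows the partner's (ghost) basis functions.
    e_a_mono  : monomer A in its own basis
    e_a_ghost : monomer A in the full dimer basis (partner as ghosts)
    """
    return (e_a_ghost - e_a_mono) + (e_b_ghost - e_b_mono)

def cp_corrected_interaction(e_dimer, e_a_ghost, e_b_ghost):
    """Boys-Bernardi CP-corrected interaction energy:
    dimer energy minus the ghost-basis monomer energies."""
    return e_dimer - e_a_ghost - e_b_ghost

# Illustrative numbers in hartree (NOT from a real run):
bsse = counterpoise_correction(-40.500, -0.4990, -40.502, -0.4995)
e_int = cp_corrected_interaction(-41.010, -40.502, -0.4995)
```

If this is what the `mon` lines set up, then I would expect five energy evaluations in total (dimer, each monomer in its own basis, each monomer with ghosts).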
First of all, I would like to be sure that the implementation of the XYG3 functional is correct.
According to Zhang et al., XYG3 is a doubly hybrid functional that can be summarized as:
Hartree-Fock exchange (0.8033) + Slater local exchange (0.1967) + Becke 1988 nonlocal exchange (0.2107), with Lee-Yang-Parr 1988 correlation (0.6789) + MP2 correlation (0.3211).
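To check that my `xc` line matches this definition, I wrote the combination down as a linear sum (the component energies here are placeholders, not NWChem outputs):

```python
# XYG3 coefficients as I read them from Zhang et al. and as I typed them
# on the xc line of my input.
XC_COEFFS = {
    "hf_exchange":      0.8033,  # hfexch
    "slater_exchange":  0.1967,  # slater
    "becke88_nonlocal": 0.2107,  # becke88 nonlocal (gradient part only)
    "lyp_correlation":  0.6789,  # lyp
    "mp2_correlation":  0.3211,  # mp2
}

def xyg3_xc_energy(components):
    """Linear combination of component XC energies with the XYG3 weights.
    components: dict mapping the names above to energies (hartree)."""
    return sum(XC_COEFFS[name] * e for name, e in components.items())

# Sanity checks: HF + Slater exchange weights sum to 1,
# and LYP + MP2 correlation weights sum to 1.
assert abs(XC_COEFFS["hf_exchange"] + XC_COEFFS["slater_exchange"] - 1.0) < 1e-12
assert abs(XC_COEFFS["lyp_correlation"] + XC_COEFFS["mp2_correlation"] - 1.0) < 1e-12
```

The two sanity checks pass with the coefficients above, so at least the weights on my `xc` line are internally consistent with the published definition.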
In particular, I am not sure that the MP2 correlation contribution is implemented correctly in my input.
When I run my input, the calculation stops abruptly and I find this message in the log file:
0:pnga_create_config:ga_set_data:wrong dimension specified:Received an Error in Communication
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI COMMUNICATOR 3 DUP FROM 0
with errorcode -1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
Could you please tell me what this error means?
I do NOT run NWChem under MPI, but I get the same error when I try to use parallel jobs.
Could you please help me?
Thank you very much in advance,
Guillaume