NWChem CPHF Module
------------------
scftype = RHF
nclosed = 195
nopen = 0
variables = 178425
# of vectors = 354
tolerance = 0.10D-03
level shift = 0.00D+00
max iterations = 50
max subspace = 3540
SCF residual: 7.2974634289027011E-007
Iterative solution of linear equations
No. of variables 178425
No. of equations 354
Maximum subspace 3540
Iterations 50
Convergence 1.0D-04
Start time 25036.4
------------------------------------------------------------------------
lkain: failed allocating subspace4 354
------------------------------------------------------------------------
current input line :
151: TASK scf hessian
------------------------------------------------------------------------
For more information see the NWChem manual at http://nwchemgit.github.io/index.php/NWChem_Documentation
------------------------------------------------------------------------
0:0:lkain: failed allocating subspace4:: 354
(rank:0 hostname:daniilkh pid:4081):ARMCI DASSERT fail. ../../ga-5-1/armci/src/common/armci.c:ARMCI_Error():208 cond:0
Last System Error Message from Task 0:: No such file or directory
MPI_ABORT was invoked on rank 0 in communicator MPI COMMUNICATOR 4 DUP FROM 0
with errorcode 354.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
------------------------------------------------------------------------
1:1:lkain: failed allocating subspace4:: 354
(rank:1 hostname:daniilkh pid:4082):ARMCI DASSERT fail. ../../ga-5-1/armci/src/common/armci.c:ARMCI_Error():208 cond:0
2:2:lkain: failed allocating subspace4:: 354
(rank:2 hostname:daniilkh pid:4083):ARMCI DASSERT fail. ../../ga-5-1/armci/src/common/armci.c:ARMCI_Error():208 cond:0
3:3:lkain: failed allocating subspace4:: 354
(rank:3 hostname:daniilkh pid:4084):ARMCI DASSERT fail. ../../ga-5-1/armci/src/common/armci.c:ARMCI_Error():208 cond:0
Last System Error Message from Task 1:: No such file or directory
Last System Error Message from Task 2:: No such file or directory
Last System Error Message from Task 3:: No such file or directory
mpirun has exited due to process rank 2 with PID 4083 on
node daniilkh exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
[daniilkh:04080] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[daniilkh:04080] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
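For context on the failure above: `lkain` is NWChem's iterative linear-equation solver, and it aborts here because it cannot allocate its subspace workspace. A rough back-of-the-envelope estimate of what a single full subspace would need, using the numbers reported in the solver header (178425 variables, maximum subspace 3540) and assuming double-precision (8-byte) storage. The variable names below are illustrative, not NWChem internals:

```python
# Rough memory estimate for the CPHF iterative-subspace workspace.
# The numbers come from the log header above; 8 bytes per element
# assumes double precision. Names are illustrative, not NWChem's.

n_variables = 178_425   # "variables" in the log
max_subspace = 3_540    # "max subspace" in the log
bytes_per_real = 8      # double precision (assumption)

subspace_bytes = n_variables * max_subspace * bytes_per_real
subspace_gib = subspace_bytes / 2**30

print(f"one full subspace: {subspace_gib:.1f} GiB")  # ~4.7 GiB
```

Since the solver keeps more than one array of this shape (e.g. solution and residual histories), the total demand is a multiple of this figure, which plausibly exceeds the job's default allocation. Raising the limit with NWChem's standard `memory` directive in the input file (e.g. `memory total 8000 mb`) before the `TASK scf hessian` line is a common first step; whether that suffices for this system size would need to be verified on the machine in question.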