Dear NWChem users,
I'm currently trying to get a reference execution time for a benchmark.
My problem is that the execution time shows a lot of variation, more than 12%. The main cause is that the number of steps needed to converge changes from run to run. The test case is:
START XXXXXX
title "XXXXXXXX"
charge 0
memory total 1800 MB
scratch_dir XXXXXXX
permanent_dir XXXXXXXXXXX
geometry units angstroms print xyz noautoz
XXXXXXXXXXX
basis
H XXXXXX
C XXXXXX
end
driver
maxiter 100
end
dft
xc b3lyp
iterations 500
semidirect memsize 2900000 filesize 2400000
end
task scf optimize
that is, an SCF optimisation with DFT in a semi-direct method.
Sometimes it takes 20 steps, other times 21, 22 or 23. I have already checked that this does not come from an I/O bottleneck. It also does not seem to come from the mathematical library used (MKL).
I have looked at this section of the output file:
-------------------
Energy Minimization
-------------------
Using diagonal initial Hessian
--------
Step 0
--------
............................................
............................................
Starting SCF solution at 26.0s
----------------------------------------------
Quadratically convergent ROHF
Convergence threshold : 1.000E-04
Maximum no. of iterations : 30
Final Fock-matrix accuracy: 1.000E-07
----------------------------------------------
iter energy gnorm gmax time
----- ------------------- --------- --------- --------
1 -7735.0905221208 4.19D+00 2.26D-01 29.4
.............................
............................
for different executions. The first value "1 -7735.0905221208" changes from one execution to another, even at step 0. Since each run does not begin with the same initial value, the following iterations also differ. One comment: before this minimisation begins, files have been written to disk (and, I suppose, read back).
I suspect that this initial value differs because of the semi-direct method: the files stored on disk are in 32-bit precision, and when the code reads those files back, the initial value differs from one execution to another. So I should expect only 7 to 8 significant digits, shouldn't I? (That is what I see.) The documentation says: "with appropriate treatment for large values to retain precision."
Do I have to add a parameter to the input to activate that treatment?
I have tested setting the SCF threshold to 1e-10, which causes the files to be stored in 64-bit precision, and I have observed that the initial value is then identical between executions, which seems to confirm the assumption.
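For reference, this is roughly the directive I added for that test (assuming I have understood the scf thresh keyword correctly; please correct me if this is not the right way to tighten the threshold):
scf
  thresh 1e-10
end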
If that is the case, then I do not understand the convergence criteria: between each global step of the minimisation, files are stored and then read back in 32 bits, so I should expect only 7 to 8 significant digits of agreement between the last value of step N-1 and the first value of step N. But one of the default convergence criteria is a variation of 1e-06 in the total energy. Since the initial value of a step seems to carry only about 4 reliable digits after the decimal point, that might explain why the stopping criterion is reached after a different total number of steps in different runs.
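To put rough numbers on my reasoning (this is only an order-of-magnitude estimate on my part): single precision keeps about 7 decimal digits, so on an energy of about -7735 Hartree that corresponds to an absolute precision of roughly
  7735 x 1.2e-07 ~ 1e-03 Hartree,
whereas double precision (about 16 digits) would give roughly 1e-12 Hartree. The energy itself is of course accumulated in double precision, so this is only a crude bound on the noise introduced by 32-bit files, but it is far coarser than the default 1e-06 energy criterion and seems consistent with what I observe.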
Does this explanation look right to you, or am I missing something?
This is the first time I am running NWChem, so please tell me if I am completely wrong from the start.
Is it possible to ask for 64-bit file storage on disk while keeping the SCF threshold at its default value?
Or do you see another way to obtain a constant number of steps across executions?
My goal is only to have a reference time.
Thank you in advance for your help,
Pierre-Antoine