2:43:38 PM PDT - Tue, May 22nd 2012
Hi Karol,
Right now, the test system has 1095 basis functions, 90 electrons, and 15 atoms ((H2S)5 with aug-cc-pVQZ). However, there are 35 linearly dependent vectors, which leaves 1060 MOs. I am keeping the 42 lowest and 1009 highest orbitals "frozen" (inactive would be the more precise term, I guess), so effectively a [6,9] active space (6 electrons in 9 orbitals).
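In case it matters, the relevant part of the input looks roughly like the sketch below (not my exact deck); the freeze counts are the 42 core and 1009 virtual orbitals mentioned above, and the ccsd line is just a stand-in for the actual model directive.

    # rough sketch of the relevant directives, not the full input
    tce
      ccsd
      # stand-in for the actual model keyword
      freeze core 42
      freeze virtual 1009
    end
    task tce energy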
The purpose is just to get memory requirements for a big calculation on my actual system, where I can systematically expand the active region up to the resource limits. I tried calculations without the 2eorb or 2emet options, but they just segfaulted. Using the disk allowed an (H2S)4 trial to finish, and my assumption was that I could estimate the global memory requirement for an in-core GA (algorithm 13 or 14) job from the disk space that run used (1.2 TB for the small (H2S)4 job), without having to set aside a massive number of nodes only to segfault once the job finally made it through the queue.
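Schematically, what I have in mind for the in-core run is something like the fragment below, where the global allocation is exactly the number I am trying to estimate (roughly the 1.2 TB total from the disk-based trial, divided across however many nodes I request); the values shown are placeholders.

    # sketch only; the memory values are placeholders
    memory stack 1000 mb heap 200 mb global 8000 mb
    tce
      2eorb
      2emet 13
      # or 2emet 14
    end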
So, I do intend to switch to a better algorithm once I know the approximate memory requirement, but if I move to 13 or 14 now (in-core, according to the documentation) without knowing that number, I'll just end up segfaulting, won't I?