Solved: Minor bug - cut/paste turns into copy/paste in the ECCE Organizer when moving up in the directory hierarchy


Gary,
Another bug to be aware of:

Cut/paste (Ctrl+X) in the ECCE Organizer works fine in general. However, if you have a directory structure like this:

a
|--b
|--c

(i.e. "b" and "c" are subfolders of "a"), and you want to cut/paste a job from directory "b" to either "a" or "c", the operation turns into copy/paste. In other words, the original job in "b" doesn't get deleted; it only gets replicated in the destination ("a" or "c").

Cut/paste of a job from a higher folder to a lower one (e.g. "a" to "b" or "c") works as expected, as does cut/paste between folders at the same level (e.g. between "b" and "c").

I'll see what I can make of it, but these are subtle bugs which are difficult to track down if you're not familiar with the code.

Cheers,
Andy

Hi Andy,

I've just pushed out new ECCE binary and source code distributions with a fix for this cut/paste bug. I didn't test it extensively, but based on what I changed I expect it to work consistently. The fix was a simple switch of a parameter passed to a method from false to true, but it took a few hours to trace the behavior down to the problem code and then discover the parameter that needed to be changed. ECCE is so big, and had only a handful of original developers, that no one really has a good feel for all of it, so I often revert to pretty basic debugging techniques like print/cout statements to see what is going on. If you want to see what I did, search the source code distribution file $ECCE_HOME/src/wxgui/wxtools/WxResourceTreeCtrl.C for "GDB".
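For anyone curious what that kind of fix looks like, here's a minimal sketch (not the actual ECCE code; the names Folder, paste(), and removeSource are made up for illustration) of how flipping a single boolean parameter from false to true is the difference between copy/paste and true cut/paste:

```cpp
// Hypothetical sketch, NOT the real ECCE implementation: a paste routine
// that takes a flag saying whether the source item should be removed.
#include <set>
#include <string>

// Model a folder as a set of job names.
using Folder = std::set<std::string>;

// Paste "job" from src into dst.  Passing removeSource=false gives
// copy/paste behavior (the bug); passing true gives true cut/paste.
inline void paste(Folder& src, Folder& dst,
                  const std::string& job, bool removeSource) {
    dst.insert(job);
    if (removeSource) {
        src.erase(job);  // the one flag that turns a copy into a move
    }
}
```

The point is that the move/copy distinction can live in a single parameter deep in a shared code path, which is why the symptom (copy instead of move, and only for certain source/destination pairs) was so cheap to fix once found but so tedious to locate.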

Obviously you've noticed that cut/paste works differently here than in other applications, in that the cut doesn't happen until after you've done the paste. If that were an easy change, I'd make it, but the current behavior greatly simplifies the implementation in our case, so I'm not able to put that time into it.

The other change in this latest ECCE 6.4 distribution is that NWChem is built to use MPI rather than the default TCGMSG for running on multiple processors. We found that the old "parallel" executable distributed with ECCE no longer worked with NWChem 6.1.1, necessitating the switch to MPI. I added the minimum required "mpirun" executables to the ECCE binary distribution, so users who have never installed MPI should be able to run the new NWChem build. I'm also discontinuing the 32-bit binary ECCE distributions ("m32") because of some difficulty in getting a 32-bit NWChem MPI build to work correctly. It's just not possible to continue to provide a little-used distribution without an ECCE budget.

I made this fix against my better judgement. I'm thinking you'll just go to the next issue on your secret list and post it here shortly. But since you're the only person I know who blogs about using ECCE, I'm fine with providing these extra little fixes when they aren't too much work. The other issue reported here by another user, on editing atom/residue table entries, unfortunately falls into the too-much-work category at the current time.

Cheers,
Gary

Cheers Gary,
I think that's the last item on my secret list for now.

Again, I really appreciate all the hard work -- maintaining code beyond a certain size is not easy, even if you had been the sole author.

Thanks for the troubleshooting suggestions -- I hope to be able to contribute eventually (in particular, adding new basis sets to the basis set tool (the def2-* sets are in NWChem but not listed in ECCE), adding solvation models (PCM, IPCM, CPCM) to the Gaussian dialogue, etc.).

I've downloaded the source, but I'll hold off upgrading until my current jobs have finished. I notice that the 32-bit NWChem is still included in the source, though -- will it go into v6.5?
./build/3rdparty-dists/nwchem-6.1.1-binary-rhel5-gcc4.1.2-m32.tar.bz2
./build/3rdparty-dists/nwchem-6.1.1-binary-rhel5-gcc4.1.2-m64.tar.bz2

Again, thanks for all the hard yakka (as the Australians say).
/Andy

That 32-bit NWChem is the older TCGMSG binary rather than the MPI one I recently built. Right now, though, our default job submit scripts won't work with it if you select more than a single processor for a job. When NWChem releases their next production version (I hear in the coming months), I'll likely drop the 32-bit NWChem binary and only build a 64-bit MPI distribution. You can use our compute server registration capability to override the default job submit script, but then again I couldn't get "parallel" to work with TCGMSG for NWChem 6.1.1 anyway, which is the whole reason I switched to an MPI build. Someone more adept at NWChem might be able to figure it out; it seems strange that TCGMSG is the default for building NWChem when it doesn't seem to work on multiple processors.

-Gary

