12:47:32 AM PST - Mon, Dec 17th 2012
Quote: Huub, Nov 28th 9:41 am
Hi,
SHMMAX is a kernel parameter that sets the maximum size, in bytes, of a single shared memory segment a process can allocate. As far as I can see, Ubuntu sets this to 33554432 bytes (32 MB), but NWChem will by default try to allocate about 200 MB, so the calculation fails right at the beginning. Setting SHMMAX as an environment variable is not going to solve this, because it is a kernel parameter: you can set SHMMAX to anything you want in your environment, but if the kernel will not let you have that much shared memory it won't help. What you need to do instead is change the kernel parameter itself. The following page shows how: http://www.linuxforums.org/forum/red-hat-fedora-linux/17025-how-can-i-change-shmmax.html. That page refers to a different Linux distribution, but these basic settings do not differ much between distributions. You will likely need root permissions, so you may have to ask a system administrator to do this.
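For example, on most modern distributions something along these lines should work (the 512 MB value here is just an illustration; pick whatever your calculations need):

    cat /proc/sys/kernel/shmmax                 # check the current limit in bytes
    sudo sysctl -w kernel.shmmax=536870912      # raise it for the running kernel (512 MB)
    echo "kernel.shmmax = 536870912" | sudo tee -a /etc/sysctl.conf   # make it persistent
    sudo sysctl -p                              # reload /etc/sysctl.conf

Note that editing /etc/sysctl.conf alone does not change the running kernel; the new value only takes effect after "sysctl -p" or a reboot.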
I hope this helps,
Huub
Hi,
I have already set shmmax through /etc/sysctl.conf, on all nodes, to a value larger than the one stated in the error message. The problem persists; however, if I use mpirun across 2 cores locally, NWChem runs fine. Is there any other likely cause for this problem?
Thank you.