Frequency Analysis Grid Density Problem


Forum Vet
Reduce memory usage
I have analyzed only the Lichtenberg folder. A quick browse of the other two folders suggests that those jobs ran out of time.
The job reported in the Lichtenberg folder was killed because it ran out of memory. If you look at the error file slurm-*.out, you will find the line
slurmstepd: error: Step 10937389.0 exceeded memory limit (60372784 > 59392000), being killed
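As a side note (this is my reading of the slurmstepd message, not stated explicitly in it): the two numbers in that line are in kB, so the limit and the observed usage work out to roughly 59 GB and 60 GB:

```shell
# The slurmstepd message reports memory in kB.
# Converting both numbers to (decimal) GB:
echo $((59392000 / 1000000))   # limit -> 59
echo $((60372784 / 1000000))   # usage -> 60
```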

Since your input file might allow each process to use up to ~8GB and you have 24 processes per node, you might end up using 24*8 = 192GB.
In reality, your system allows you to use only up to ~60GB per node, therefore you need to
either drastically reduce the memory line in your input file
or use fewer processes on each node (say 8 or fewer)
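For the second option, a sketch of how the batch script header might look. The #SBATCH options are standard Slurm; the executable name is a placeholder, and the exact limits depend on your cluster's configuration:

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8   # down from 24: 8 processes * ~8 GB each stays under the ~60 GB/node limit

srun ./my_job                 # hypothetical executable; replace with your actual run line
```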