4:11:11 PM PDT - Tue, Jul 22nd 2014

It depends strongly on the job. Large post-HF calculations can repeatedly read and write enormous scratch files if your compute cluster is not large enough to hold all of the data in distributed RAM. I have had bad luck running TCE (Tensor Contraction Engine) calculations that relied on disk-based I/O schemes: they crashed frequently and ran slowly even when they completed. The strongly preferred setup seems to be "use enough nodes to fit everything in RAM," and large disk-backed calculations may work poorly regardless of disk speed.
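To make "fits in RAM" concrete: CCSD-type doubles amplitudes scale as roughly o^2 v^2 double-precision numbers (o occupied, v virtual orbitals), and a run also holds integrals and intermediates on top of that. Here is a back-of-the-envelope sketch of the node count needed; the 4x overhead factor and 50% usable-RAM fraction are my own illustrative assumptions, not NWChem's actual storage model:

    # Rough estimate of whether CCSD doubles amplitudes fit in
    # aggregate (distributed) RAM. Treat this as a lower bound:
    # real TCE runs also store integrals and intermediates.

    def ccsd_t2_bytes(n_occ, n_virt, overhead=4.0):
        # ~o^2 v^2 double-precision amplitudes, times a fudge
        # factor for integrals/intermediates (assumed, not exact).
        return (n_occ ** 2) * (n_virt ** 2) * 8 * overhead

    def nodes_needed(n_occ, n_virt, ram_per_node_gb, usable_fraction=0.5):
        # Nodes required to hold the estimate in distributed RAM,
        # assuming only part of each node's RAM is usable for data.
        total = ccsd_t2_bytes(n_occ, n_virt)
        per_node = ram_per_node_gb * 1e9 * usable_fraction
        return int(-(-total // per_node))  # ceiling division

    o, v = 100, 1000  # occupied / virtual orbital counts
    print(f"~{ccsd_t2_bytes(o, v) / 1e9:.0f} GB in distributed RAM")
    print(f"~{nodes_needed(o, v, ram_per_node_gb=64)} nodes at 64 GB/node")

For that example the estimate comes out around 320 GB, i.e. about ten 64 GB nodes, which is why "more nodes" often beats "faster disks" for these jobs.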
I don't think a hybrid drive is worth the extra expense over a plain spinning-platter drive. The hybrid's flash cache can accelerate repeated access to small units of storage, like a database with a few heavily used tables, but the cache is far smaller than the huge temporary files NWChem can generate, so it will not help there.
I would strongly suggest trying a few typical jobs before buying new hardware. If your calculations don't generate much I/O activity, you probably don't need to upgrade or supplement storage. If they do need a lot of disk I/O, first make sure they can complete using disk-based storage without crashing; improving disk performance will not help if it just makes your jobs crash faster.
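If you want a number rather than a feeling, you can bracket a representative run with system-wide disk counters. A minimal sketch using the psutil package (the mpirun/nwchem command line and test_job.nw input are placeholders for your own job, and the counters include any other disk activity on the machine):

    import subprocess
    import psutil

    # Snapshot system-wide disk counters, run the job, snapshot again.
    before = psutil.disk_io_counters()
    subprocess.run(["mpirun", "-np", "8", "nwchem", "test_job.nw"], check=True)
    after = psutil.disk_io_counters()

    print(f"read:  {(after.read_bytes - before.read_bytes) / 1e9:.1f} GB")
    print(f"wrote: {(after.write_bytes - before.write_bytes) / 1e9:.1f} GB")

If the totals are small compared to your RAM, a storage upgrade probably buys you nothing; if they are huge, confirm the job actually finishes before paying to make it finish faster.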