Lots of datasets in flight. PEPCONF includes some unprecedentedly large molecules, and success rates are dropping; the dataset has stalled at around 25% completion.
TG – Could we optimize using HF and then “polish” using D3BJ?
BP – I’ve done molecules of this size with comparable basis sets/methods on smaller nodes. Not sure why this is having so much trouble. 700 basis functions is not that large – I’d consider large to be >1500. Especially using density fitting, this shouldn’t be a memory issue. I was using a different program though.
DD – Lots of unknown errors, opened an issue to improve error handling in the Psi4Harness
PB – Should these molecules be fragmented?
JH – Reading through PEPCONF paper, they used a BSIP method/technique, which is well-suited for these molecules. Also used ultra-fine grid.
BP – Never heard of that basis set, and it’s not available in Psi4. The geometry optimizer in Gaussian is likely better than Psi4’s.
JH – Could we submit with different settings?
BP – I’d be cautious about submitting different parts of a dataset with different settings.
DD – Will move PEPCONF to scientific review, and ask John how to move forward.
DD – Increased priority of HJ’s benchmarking dataset.
DD – HJ’s old dataset has one incomplete record; it crashed on my machine.
DD – Over the weekend, I completed the BCC refit study for SB.
DD – Given that we’re changing priorities frequently, I’d like to implement a system where we can change priorities/compute tags using GH labels.
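A minimal sketch of how such a label-driven system could work. The label names (`priority-high`, `compute-tag:<tag>`) and the mapping are hypothetical assumptions, not an existing convention:

```python
# Hypothetical sketch: parse GitHub issue labels into priority / compute-tag
# settings. Label names and priority values here are assumptions for
# illustration only.

PRIORITY_LABELS = {
    "priority-low": 0,
    "priority-normal": 1,
    "priority-high": 2,
}

def settings_from_labels(labels):
    """Return (priority, compute_tag) parsed from a list of label names.

    Unrecognized labels are ignored; missing settings come back as None.
    """
    priority = None
    compute_tag = None
    for name in labels:
        if name in PRIORITY_LABELS:
            priority = PRIORITY_LABELS[name]
        elif name.startswith("compute-tag:"):
            compute_tag = name.split(":", 1)[1]
    return priority, compute_tag

print(settings_from_labels(["priority-high", "compute-tag:openff"]))
# (2, 'openff')
```

An automation job (e.g. a scheduled GitHub Action) could read the labels on each submission issue and push the parsed settings to the compute manager.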
DD – One submission timed out in GHA validation. JH fixed this with a patch to QCSubmit.
DD – TG has been working on dataset standards.