|  |  |
|---|---|
Infrastructure needs/advances | DD: BP is working on troubleshooting QCA server issues. DD – Working on getting the environment running on the Max Planck Institute (MPI) cluster. DD – We got the Lilac allocation back up; the problem was the Lilac monitoring tool, which couldn't handle large array jobs. We're now up to 8000 array jobs, from 1000.
Throughput status | OpenFF Protein Capped 1-mer Sidechains v1.2: 42/46 TDs complete. SPICE PubChem Set 2 Single Points Dataset v1.2: 121428, up from 121383; almost complete, around 100 remaining. Are those ~100 incomplete jobs stuck? DD – What can happen in some cases is that managers hold onto jobs for a long time and don't report that they're no longer being worked on. The solution is generally to kill those managers. But we don't have any managers on PRP (I killed them last night), and I turned off the Lilac managers. PB – Maybe they're TG's managers? I'll ping him. DD – But those are time-limited, right? PB – They have a 14-day limit. DD – Agreed, let's ask Trevor to shut down his workers. PB will tell TG to turn off the UCI workers. DD will shut down his local in-house manager. SPICE PubChem Set 3 Single Points Dataset v1.2: 95725, up from 69397; about 25K calcs in one week, the same slow throughput as the week before. DD: Lilac is up to full capacity again, so this shouldn't be an issue now.
User questions/issues | DD pushed PubChem Set 4 to compute as Set 3 is nearing completion. Jessica may submit a small set of torsion scans (around 100) and a corresponding optimization set (100 * 50 confs). CC – Submitting new torsiondrives with different starting coordinates to hopefully get around clashes.
Science support needs | JW – JH, to keep you in the loop, DD has been asking whether he should prioritize developing the QCEngine optimize-then-single-point procedure. I've told him not to prioritize this, since we need to get the F@H work done first, so it will be a while in coming (see the sketch below the table).
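
For reference, a minimal sketch of the two-step pattern that an optimize-then-single-point procedure would automate, assuming QCEngine with the geomeTRIC and Psi4 harnesses installed. The molecule, method, and basis here are placeholders, and this simply chains the existing QCEngine calls by hand; it is not the planned implementation.

```python
# Sketch: run a geometry optimization (geomeTRIC driving Psi4 gradients),
# then a single-point energy at the optimized geometry. Placeholder inputs.
import qcengine
from qcelemental.models import AtomicInput, Molecule
from qcelemental.models.procedures import OptimizationInput, QCInputSpecification

molecule = Molecule.from_data(
    """
    O  0.000  0.000  0.000
    H  0.000  0.757  0.587
    H  0.000 -0.757  0.587
    """
)

# Step 1: geometry optimization.
opt_input = OptimizationInput(
    initial_molecule=molecule,
    input_specification=QCInputSpecification(
        driver="gradient",
        model={"method": "b3lyp-d3bj", "basis": "dzvp"},
    ),
    keywords={"program": "psi4"},
)
opt_result = qcengine.compute_procedure(opt_input, "geometric", raise_error=True)

# Step 2: single-point energy at the final geometry from step 1.
sp_input = AtomicInput(
    molecule=opt_result.final_molecule,
    driver="energy",
    model={"method": "b3lyp-d3bj", "basis": "dzvp"},
)
sp_result = qcengine.compute(sp_input, "psi4", raise_error=True)
print(sp_result.return_result)  # single-point energy in Hartree
```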