Throughput status
No calculations over the last four days; maybe a server issue?
BP: I do see lilac and PRP pushing back results.
DD: Me too. Need to dig into errorcycling. I'm already running errorcycling on Vulcan here since it is choking.
PB: It's also possible these are erroring out after resubmissions.
DD: Yeah, take a look at them and archive them if so. On the other hand, JW, should we use AWS for errorcycling when GHA is running out of memory?
JW: I would advise against that. It's not a money issue, but effort-wise it's not worth the time right now, since we can do errorcycling locally.
DD: Agreed.
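The restart-vs-archive decision DD describes could be sketched roughly as below. This is an illustrative sketch only, not the actual OpenFF errorcycling tooling: `Record`, `MAX_RESUBMITS`, and `cycle_errors` are all assumed names for this example.

```python
# Hedged sketch of the errorcycling triage DD and PB discuss: restart errored
# records, but archive ones that keep erroring after resubmissions.
# All names here (Record, MAX_RESUBMITS, cycle_errors) are illustrative
# assumptions, not the real qcportal/qcfractal API.
from dataclasses import dataclass

MAX_RESUBMITS = 2  # assumed cutoff: archive after this many failed resubmissions


@dataclass
class Record:
    id: int
    status: str         # e.g. "COMPLETE" or "ERROR"
    resubmissions: int  # how many times this record was already restarted


def cycle_errors(records):
    """Split errored records into ids to restart and ids to archive."""
    restart, archive = [], []
    for rec in records:
        if rec.status != "ERROR":
            continue  # nothing to do for completed/running records
        if rec.resubmissions >= MAX_RESUBMITS:
            archive.append(rec.id)  # repeatedly failing: archive, per DD's note
        else:
            restart.append(rec.id)  # possibly transient failure: try again
    return restart, archive


records = [
    Record(1, "ERROR", 0),
    Record(2, "ERROR", 3),
    Record(3, "COMPLETE", 0),
]
restart, archive = cycle_errors(records)
# restart == [1], archive == [2]
```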
OpenFF Protein Capped 3-mer Backbones v1.0
RNA Single Point Dataset v1.0
  default: 8864 → 8868 (100 errors)
  wb97m-d3bj/def2-tzvppd: stuck at 3468 this week; submitted fat-node jobs on the UCI cluster to get this out by the end of the week, but they are queued behind a scheduled maintenance.
SPICE sets with openff-default spec
  SPICE DES370K Single Points Dataset v1.0: 186218 → 679406 (remaining: 11875 errors, 83 incomplete)
  SPICE DES Monomers: 0 → 37150 (remaining: 250 errors)
  SPICE Ion Pair: 0 → 2153 (remaining: 703 errors)
In queue: Moved the PubChem sets to the scientific review column; errorcycling is choking with out-of-memory errors, so they will be pushed to compute one by one.
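Pushing the PubChem sets to compute one by one to dodge the out-of-memory errors amounts to submitting in small batches. A minimal sketch of that batching, assuming a hypothetical `submit_batch` callable standing in for the real compute submission call:

```python
# Hedged sketch of batched submission to keep memory use bounded.
# `submit_batch` and `batch_size` are illustrative assumptions, not the
# actual OpenFF/QCFractal submission interface.
def chunked(ids, size):
    """Yield successive fixed-size batches from a list of record ids."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]


def push_in_batches(record_ids, submit_batch, batch_size=1000):
    """Submit record ids batch by batch; return the number of batches pushed."""
    n = 0
    for batch in chunked(record_ids, batch_size):
        submit_batch(batch)  # each batch stays small enough to fit in memory
        n += 1
    return n


submitted = []
count = push_in_batches(list(range(2500)), submitted.append, batch_size=1000)
# count == 3; the batches have sizes 1000, 1000, and 500
```

Submitting serially like this trades wall-clock time for a bounded memory footprint, which matches the plan of feeding the sets to compute one at a time.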