User issues, new submissions
PB: #220 - need new qcsubmit
JH: just have one PR on qcsubmit blocking release; working on this
PB: have one compute spec on this submission, DF-CCSD(T)/CBS, that will take up to 150 GiB of memory for 16 heavy atoms
PB: typically also use 48 cores
JH: working on #223, blocked by validation issues as well
JH: ML stuff, adding HDF5 support for QCSubmit instead of a ton of SDFs; can use one file
JW: what are the contents?
JH: conformers and mapped SMILES
JW: is this file going to contain the same content as the other files, or is there something fundamentally different here? one thing that makes SDF safer is that the readers and writers are not something we’re defining
JH: there’s a lot of repeated info in the SDF
JW: good point
JH: understand concerns about future variability; would like to get a spec down as much as possible
JH: any feedback anyone has on this issue is appreciated
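As a point of reference for the discussion above, a minimal sketch of what a single-file HDF5 layout for conformers plus mapped SMILES could look like, assuming h5py; all group, dataset, and attribute names here are hypothetical, not the actual QCSubmit spec:

```python
# Hypothetical single-file layout: one group per molecule, the mapped
# SMILES stored once as an attribute (avoiding the per-conformer
# duplication of SDF), and all conformers in one (n_conf, n_atoms, 3) array.
import h5py
import numpy as np

def write_entry(h5file, name, mapped_smiles, conformers):
    """Store one molecule: mapped SMILES once, all conformers in one dataset."""
    grp = h5file.create_group(name)
    grp.attrs["mapped_smiles"] = mapped_smiles  # stored once, not repeated
    grp.create_dataset("conformers", data=np.asarray(conformers))

# In-memory file so the example leaves nothing on disk.
with h5py.File("example.h5", "w", driver="core", backing_store=False) as f:
    confs = np.zeros((2, 3, 3))  # two conformers of a 3-atom molecule
    write_entry(f, "water", "[O:1]([H:2])[H:3]", confs)
    smiles_back = f["water"].attrs["mapped_smiles"]
    shape_back = f["water/conformers"].shape
```

The contrast with SDF is that the topology/SMILES is written once per molecule rather than once per conformer record.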
DD: concerned about collection size; will run into the same issue as before
SB+JH: not clear if it’s a single collection with a million conformers, or spread across several collections, or multiple million-conformer collections
BP: the metadata object for a collection gets very big as more and more objects are involved (molecules, specs), so this becomes an issue with the way collections are currently implemented
SB: can see this taking another month for John and Peter to resolve; what is the timeline for next branch deployment?
JW: DD, would you be willing to jump onto the next OpenMM call to lay out constraints?
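A toy model of BP’s point about metadata growth (not QCFractal’s real schema): if a collection’s metadata object enumerates every record, its serialized size grows linearly with the number of molecules/specs it tracks.

```python
# Illustrative only: serialized metadata size scales with record count
# when every entry is listed inline in the collection's metadata object.
import json

def metadata_size(n_entries):
    meta = {
        "name": "example-collection",  # hypothetical field names
        "records": {f"mol-{i}": {"spec": "default", "status": "COMPLETE"}
                    for i in range(n_entries)},
    }
    return len(json.dumps(meta))

small = metadata_size(1_000)
large = metadata_size(100_000)  # ~100x the serialized size of `small`
```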
JH: is a test submission still in play?
SB: Chapin’s dataset; what’s the status?
DD: worked with him to set up a manager on UCSD resources; can switch it on and off at will; waiting for word on new submission status
SB: think there may still be some ambiguity on what data, and how it will be different from the Cerutti sets; will coordinate with Chapin and see where we’re at