Item | Presenter | Notes |
---|---|---
Project updates | BW | Past week:<br>- Refiltering industry dataset<br>&nbsp;&nbsp;- LW: Is this meant to be the required way people do it (re: the GitHub PR)?<br>&nbsp;&nbsp;- LW: When would we want to re-filter it?<br>&nbsp;&nbsp;- LW: Should we over-engineer it and put dates/numbers on it now? Might be good to keep some standards up to date.<br>&nbsp;&nbsp;- (Everyone): How to handle outliers? Unclear; this intersects with the other discussion of bad QCA records. Table until tomorrow's meeting? Also, would we want these versions to be a "release" dataset?<br>- qcsubmit and bespokefit PRs<br>- lipidmaps dataset test<br>- organometallics dataset<br>Next week:<br>- Run benchmarks<br>- lipidmaps dataset<br>- organometallics dataset |
Project updates | AMI | Past week:<br>Next week:<br>- More on DDX dataset: re-submit stragglers with new guess, start optimizations<br>- Look into NAGL architecture<br>- Look into testing dataset |
| LW | Past week:<br>- Wrapping up PRs (QCSubmit fix, etc.)<br>- Resubmitted hessian dataset<br>- Monitoring QCA workers<br>- yammbs PR<br>- Benchmarks<br>Next week:<br>- Protein stuff?<br>- Evaluator (virtual sites)<br>- Interchange follow-ups |
Project updates | AMI | Past week:<br>Next week:<br>- NAGL2 optimization dataset<br>- Hessian DSs (freeze phosphate angle refit?)<br>- NAGL2 testing dataset: examine existing coverage and maybe set up optimizations<br>- Finish standards |
| LW | |