Roundtable updates

**Jeff Wagner**
- Chemical perception / AMBER FF porting work
- openff-1.2.1 release
- Pavan coming on board – will be given a feature development task in OFFTK or other core infrastructure. After the first 90 days he will become more independent and start doing QM benchmarking and dataset creation.
- Shirts has won a grant to do interoperability, likely a combination of our current System goals and a ParmEd replacement. Room for about one more FTE. Meeting about this later today/this week.
**Matt Thompson**
- Started lots of stuff.
- Approved vsites PR – saw decreasing marginal returns; TG is still tidying up.
- Lots of open OFFTK PRs. Need to either carry them forward or delegate.
- Progress on bond WBOs. Kept track of the nice optional things that aren't necessary for the functionality; will handle those in separate PRs. Roughly ready for feedback. (See the WBO sketch after this discussion.)
  - Are bond WBOs ready for integration into a FF?
  - JW – Not sure. HJ's training next week will get a lot more people ready to work on this.
  - SB – Do we have work happening for torsion WBOs?
  - JW – Unsure. I think JM has started doing some work with torsion WBOs by hand, but hasn't used ForceBalance yet.
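For context on what "bond WBOs" feed into, a minimal, hedged sketch of the workflow: assign AM1-Wiberg fractional bond orders with the toolkit and linearly interpolate a bond force constant between the values anchored at integer bond orders 1 and 2. This assumes a recent openff-toolkit with an AM1-capable backend (AmberTools or OpenEye) installed; the k_bondorder1/k_bondorder2 values below are invented for illustration and are not parameters from any released force field.

```python
# Hedged sketch: AM1-Wiberg bond orders plus linear interpolation of a force constant.
# Assumes openff-toolkit with an AM1-capable backend; the k values below are made up.
from openff.toolkit.topology import Molecule

molecule = Molecule.from_smiles("CC=O")
molecule.assign_fractional_bond_orders(bond_order_model="am1-wiberg")

k_bondorder1, k_bondorder2 = 500.0, 1000.0  # illustrative force constants, kcal/mol/A^2
for bond in molecule.bonds:
    # Linear interpolation (or extrapolation) in the Wiberg bond order, anchored at 1 and 2.
    k = k_bondorder1 + (k_bondorder2 - k_bondorder1) * (bond.fractional_bond_order - 1.0)
    print(bond.atom1_index, bond.atom2_index,
          round(bond.fractional_bond_order, 3), round(k, 1))
```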
- Some work on the conformer generation CLI tool (rough sketch below).
- Worked on OFFSystem.to_parmed. Thinking about how to divvy up OFFSystem ←→ other-format work.
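A minimal sketch of what a conformer-generation CLI built on the toolkit could look like; the flag names, defaults, and output handling here are assumptions, not the actual tool's interface.

```python
# Illustrative conformer-generation CLI; flags and defaults are assumptions, not the real tool.
import argparse

from openff.toolkit.topology import Molecule
from openff.units import unit  # recent toolkit versions use openff-units quantities


def main() -> None:
    parser = argparse.ArgumentParser(description="Generate conformers for a molecule.")
    parser.add_argument("input_file", help="Single-molecule file readable by the toolkit (e.g. SDF)")
    parser.add_argument("output_file", help="Output SDF path")
    parser.add_argument("--n-conformers", type=int, default=10)
    parser.add_argument("--rms-cutoff", type=float, default=1.0, help="RMS cutoff in Angstrom")
    args = parser.parse_args()

    molecule = Molecule.from_file(args.input_file)  # assumes a single-molecule input file
    molecule.generate_conformers(
        n_conformers=args.n_conformers,
        rms_cutoff=args.rms_cutoff * unit.angstrom,
    )
    print(f"Generated {molecule.n_conformers} conformers")
    # Note: how multiple conformers are written may vary by toolkit version.
    molecule.to_file(args.output_file, file_format="SDF")


if __name__ == "__main__":
    main()
```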
**David Dotson**
- Had a wonderful vacation.
- Worked with Jeff Setiadi on pAPRika/Evaluator integration.
- Worked with Dominic Rufa on the ligand benchmark set.
- Addressed torchANI failures in QCEngine: model files were being repeatedly loaded, and this was causing file system slowdown issues (see the caching sketch below).
- Working on JSON serialization.
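The general fix pattern for the repeated-model-load problem is to construct the model once per process and reuse it. The snippet below is only an illustration of that pattern, not the actual QCEngine patch.

```python
# Illustrative caching pattern only, not the actual QCEngine change: construct the torchANI
# model once per process so repeated calculations don't re-read model files from disk.
import functools

import torchani


@functools.lru_cache(maxsize=None)
def get_ani_model(name: str = "ANI2x"):
    """Build a torchANI model by class name and cache it for the lifetime of the process."""
    return getattr(torchani.models, name)()


model = get_ani_model("ANI2x")  # loads model files from disk on the first call
model = get_ani_model("ANI2x")  # every later call returns the cached instance
```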
**Simon Boothroyd**
- Finished integration of Recharge into ForceBalance. Can now do ESP and electric field refits; handles wavefunction → ESP generation. Implicit solvent: see the discussion below (not supported yet). (A small sketch of the ESP/field fitting targets follows this update.)
  - SB – Pulling wavefunctions from QCF will require further development. Current datasets don't have wavefunctions, though they can be enabled.
  - SB – No implicit solvent yet.
  - DD – Do we want implicit solvent in the short term?
  - SB – It'd be good to start zeroing in on which implicit solvent model/settings we want.
  - DD – JH is looking at the technical side of this, but nobody is looking at the science side.
  - SB – Which calls?
  - DD – Tuesday QCF user group call, Friday QCF submission meeting.
  - JW – Is the timing of the fitting session and chemical perception calls OK for the UK?
- Automated a lot of the nonbonded fitting. Includes both physical property and ESP data.
- Codecov >90%, numerous cleanups/refactors.
- With regard to Dotson's PR, fixed a bug in Evaluator re: mole fractions. Also fixed an Evaluator bug with empty OpenMM checkpoint files. Fixes are in the Evaluator 0.2.1 release.
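For reference, the quantities the ESP/field refits compare against are the electrostatic potential and electric field evaluated on a grid around the molecule. The sketch below is plain textbook point-charge electrostatics in atomic units, shown only to make those fitting targets concrete; it is not the openff-recharge implementation.

```python
# Generic point-charge ESP and electric field on a grid (atomic units); shown only to make
# the ESP/field fitting targets concrete. This is not the openff-recharge implementation.
import numpy as np


def point_charge_esp_and_field(charges, charge_coords, grid_coords):
    """charges: (N,), charge_coords: (N, 3), grid_coords: (M, 3); all in atomic units."""
    r_vec = grid_coords[:, None, :] - charge_coords[None, :, :]  # (M, N, 3) displacements
    r = np.linalg.norm(r_vec, axis=-1)                           # (M, N) distances
    esp = (charges / r).sum(axis=-1)                             # V_j = sum_i q_i / r_ij
    field = (charges[None, :, None] * r_vec / r[..., None] ** 3).sum(axis=1)  # E_j, (M, 3)
    return esp, field


# Toy example: a +0.5/-0.5 charge pair one bohr apart, sampled at two off-axis grid points.
esp, field = point_charge_esp_and_field(
    np.array([0.5, -0.5]),
    np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]),
    np.array([[0.0, 2.0, 0.0], [2.0, 2.0, 0.0]]),
)
```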
**Matt Wittmann**
- Working on a maximum likelihood framework for free energy transformations, based on Hannah Bruce MacDonald's code. (A minimal sketch of this kind of estimator is below.)
- Getting ready to hand off projects, since I'm transitioning to a new job.
- Integrated the Cookiecutter structure into the F@H schema repo.
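To make the idea concrete, here is a minimal sketch of the standard maximum-likelihood (weighted least-squares) estimator for per-ligand free energies given noisy pairwise differences on a network of transformations. This is the textbook formulation, not necessarily how Hannah Bruce MacDonald's code or the F@H framework implements it.

```python
# Minimal MLE sketch: recover per-node free energies from noisy edge differences by solving
# the weighted-Laplacian normal equations. Textbook formulation; not the project's actual code.
import numpy as np


def mle_node_free_energies(edges, n_nodes):
    """edges: iterable of (i, j, delta_g, sigma), meaning g[j] - g[i] ~ delta_g +/- sigma."""
    laplacian = np.zeros((n_nodes, n_nodes))  # weighted graph Laplacian (Fisher information)
    rhs = np.zeros(n_nodes)
    for i, j, delta_g, sigma in edges:
        weight = 1.0 / sigma ** 2
        laplacian[i, i] += weight
        laplacian[j, j] += weight
        laplacian[i, j] -= weight
        laplacian[j, i] -= weight
        rhs[i] -= weight * delta_g
        rhs[j] += weight * delta_g
    # The Laplacian is singular (free energies are only defined up to an additive constant),
    # so use a pseudo-inverse and then shift so that node 0 is the reference.
    g = np.linalg.pinv(laplacian) @ rhs
    return g - g[0]


# Example: a 3-ligand cycle with slightly inconsistent edge estimates (kcal/mol).
print(mle_node_free_energies([(0, 1, 1.0, 0.2), (1, 2, 0.5, 0.2), (0, 2, 1.4, 0.3)], 3))
```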
**Discussion: pre-release force field validation**
- MT – With the 1.2.1 manual parameter fix, how are we going to keep this from happening in the future? Is there MM validation? Is there a general way we can prevent this?
- JW – I don't like the idea of an MM test set, since it won't inform us of what the specific problem is or how to fix it.
- JW – Optimistic that tighter restraints on k and a more diverse training set will prevent this.
- SB – Would be in favor of a simple test/validation set that's run before release.
- DD – Agree that these should be integration tests.
- JW – It took a lot of weird permutations of events – HMR, 4 fs timestep, vacuum simulation – to make this happen. Would we test just this permutation, or others?
- SB – Think that HMR is a common thing in pharma workflows.
- SB – Think that we should automate FF release benchmarks, like the sort that Hyesu does, but in a more automated, distributable way.
- DD – Automate using Evaluator?
- JW – Agree – could automate this or distribute a tarball of tests/benchmarks to run before each release.
- DD – Could this run on GHA? We get 6 hours of walltime per workflow.
- SB – GHA may have an option to run CI on a local machine.
- MT – Would like to establish an owner for this, and spec out what science it should cover.
- JW – Could take a coverage set of molecules, pre-compute AM1-BCC charges, and run HMR sims for 10k steps each. Should take ~1 hour. Concerned about how the organization will handle it.
- MT – Not sure about the target number – 0/1000? 7/1000? What's acceptable? I like the idea of systematic sanity checks, instead of finding problems as they flare up.
- SB – It's concerning that we don't have automated/comprehensive benchmarking. This is a major weakness until anyone can run benchmarks on a whim.
- DD – Automating testing for the propyne case by itself is valuable, even without adding a more systematic solution.
- SB – Agree.
- MT – Agree that the above would be a great starting point.
- JW – If we started speccing out these tests, I'd think that the "coverage" set is something we can use immediately and get 80% of the quality of a perfect solution.

Spec (a hedged sketch of a single canary run follows this list):
- GH action
- Runs on openforcefields PRs that add a new file
- Does HMR sims w/ 4 fs timestep if a new offxml is being added
- Loads a pre-charged molecule set
- Molecule set lives in a separate repo
- Should be seeded from the smirnoff99Frosst ("s99F") coverage set
- Also add the propyne molecule
- Emphasize that these tests are "canaries" – NOT meant to make rigorous benchmark data, just designed to prevent the release of a bad FF. These tests will NOT make data artifacts.
- Tests run
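A hedged sketch of what a single canary run could look like under the spec above. The file names, molecule set path, 3 amu hydrogen mass, and 10,000-step length are placeholders; imports assume a recent openff-toolkit/OpenMM layout (older releases used the `openforcefield`/`simtk` namespaces). The actual GH action, if built, would wrap something like this per molecule and fail the PR on unstable simulations.

```python
# Hedged sketch of a single "canary" run: load a candidate offxml, parameterize a pre-charged
# molecule, apply hydrogen mass repartitioning (HMR), and run a short 4 fs vacuum simulation,
# failing loudly if the energy goes NaN. Paths, masses, and step counts are placeholders.
import math
import sys

import openmm
from openmm import unit
from openmm.app import Simulation
from openff.toolkit.topology import Molecule
from openff.toolkit.typing.engines.smirnoff import ForceField
from openff.units.openmm import to_openmm


def run_canary(offxml_path: str, molecule_path: str, n_steps: int = 10_000) -> None:
    force_field = ForceField(offxml_path)
    molecules = Molecule.from_file(molecule_path)  # pre-charged SDF; one or many molecules
    if not isinstance(molecules, list):
        molecules = [molecules]

    for molecule in molecules:
        off_topology = molecule.to_topology()
        system = force_field.create_openmm_system(
            off_topology, charge_from_molecules=[molecule]
        )
        omm_topology = off_topology.to_openmm()

        # Manual HMR: set each hydrogen to 3 amu and subtract the added mass from its heavy partner.
        for bond in omm_topology.bonds():
            atom_a, atom_b = bond.atom1, bond.atom2
            hydrogen, heavy = (atom_a, atom_b) if atom_a.element.symbol == "H" else (atom_b, atom_a)
            if hydrogen.element.symbol != "H" or heavy.element.symbol == "H":
                continue
            extra = 3.0 * unit.amu - system.getParticleMass(hydrogen.index)
            system.setParticleMass(hydrogen.index, 3.0 * unit.amu)
            system.setParticleMass(heavy.index, system.getParticleMass(heavy.index) - extra)

        integrator = openmm.LangevinMiddleIntegrator(
            300.0 * unit.kelvin, 1.0 / unit.picosecond, 4.0 * unit.femtoseconds
        )
        simulation = Simulation(omm_topology, system, integrator)
        simulation.context.setPositions(to_openmm(molecule.conformers[0]))
        simulation.minimizeEnergy()
        simulation.step(n_steps)

        energy = simulation.context.getState(getEnergy=True).getPotentialEnergy()
        if math.isnan(energy.value_in_unit(unit.kilojoule_per_mole)):
            sys.exit(f"Canary died: unstable simulation for {molecule.to_smiles()} with {offxml_path}")


if __name__ == "__main__":
    run_canary(sys.argv[1], "coverage_set.sdf")  # "coverage_set.sdf" is a placeholder path
```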