2020-02-04 Force Field Release meeting notes

Date

Feb 4, 2020

Participants

  • @Simon Boothroyd

  • @Hyesu Jang

  • @David Mobley

  • @Lee-Ping Wang

  • @Michael Gilson

  • @Karmen Condic-Jurkic

  • @Jeffrey Wagner

  • @Joshua Horton

  • @Christopher Bayly

  • @Michael Shirts

  • @Daniel Smith (Deactivated)

  • @Owen Madin

Goals

  • Define the scope and major areas of improvement for Sage

  • Assign a leader to each part of the FF optimization and validation process

 

@John Chodera's proposal for Sage improvement (https://openforcefieldgroup.slack.com/archives/CJ58SSC5Q/p1579556659013300?thread_ts=1579554231.012700&cid=CJ58SSC5Q):

  • Hand-refined SMARTS types to fix the most egregious problems in `parsley`

  • More QM data to provide better torsions coverage, but basically use whatever new stuff we’ve generated

  • Any physical property data at all (and let’s please ditch DeltaH_vap) -- property feasibility study

  • Our infrastructure focus should really be on automated benchmarking. We did a great job in automating force field construction, but we really struggled to assess the force field. Lean on QCFractal/QCArchive to do the QM benchmarking for us now that OpenMM is in QCEngine; use perses for free energy benchmarks; compile everything into a beautiful “force field dashboard” that our Partners can show their managers to get more funding for year 3

 

@Simon Boothroyd's proposal for VdW refitting:

Non-bonded optimization

 

Discussion topics

Item

Presenter

Notes


Feasibility studies

 

  • How much time (if any at all) do we have to carry out feasibility studies before doing the formal final parameterization? Potential feasibility studies:

    • replace DHvap with DHmix

    • reduced set of LJ types (maybe H2CO3N, per Michael Sch)

    • Can Optimization datasets be used instead of torsion drives to quickly expand chemical coverage?

    • surgical strikes to improve selected BCCs

    • Wiberg bond order – though is this in fact a preliminary study or is it added infrastructure?

QM Data



Which QM data do we need, and which should we use?

Notes

  • @Christopher Bayly – any decisions about virtual sites and WBO-adjusted torsional barriers?

    • @Jeffrey Wagner – this is not within the infrastructure scope at the moment

  • @David Mobley – Are AM1 charges with an additional round of BCCs possible? @Jeffrey Wagner – Yes

  • CB – started to develop PM3-based charges at Merck; they fix a lot of functional group problems (PM3-BCC) – doesn’t know how to get this model out of OpenEye

  • DM – We are not necessarily after charges, but rather after SMARTS patterns that tell us where we should modify the charges

  • CB – the SMARTS patterns for BCCs are an 80/20 problem – a small number of patterns should cover a reasonable number of cases

  • DM – Fitting to some dielectric constant, while keeping some physical properties constant (similar to the previous situation with methanol where the -OH parameters can be then transferred to all alcohols)

  • CB – Is there infrastructure to move charge from atom A to atom B, which are bonded (A-B) – I could come up with a list of SMARTS and corrections and then someone might fit around it
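A minimal sketch of the kind of infrastructure CB describes, assuming corrections are expressed as charge shifts along bonded atom pairs. The SMARTS matching itself would come from a cheminformatics toolkit (e.g. RDKit or the OpenEye toolkits); here matched pairs are passed in directly, and all names and numbers are illustrative:

```python
# Illustrative sketch: apply a bond-charge correction that moves `delta`
# units of charge from atom A to atom B of each matched bonded pair.
# In practice the (A, B) index pairs would come from SMARTS matches
# produced by a toolkit; they are passed in directly here so the
# bookkeeping is self-contained.

def apply_bcc_corrections(charges, matched_pairs, delta):
    """Return a new charge list with `delta` moved from A to B per pair.

    Total molecular charge is conserved by construction, since every
    subtraction on A is balanced by an addition on B.
    """
    charges = list(charges)
    for a, b in matched_pairs:
        charges[a] -= delta
        charges[b] += delta
    return charges

# Made-up example: shift 0.05 e from a carbon to its carbonyl oxygen.
initial = [0.40, -0.40, 0.00]
corrected = apply_bcc_corrections(initial, matched_pairs=[(0, 1)], delta=0.05)
assert abs(sum(corrected) - sum(initial)) < 1e-12  # net charge unchanged
```

Someone could then fit the per-pattern `delta` values against reference data, as CB suggests.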

  • @Jeffrey Wagner – if we exclude ions and hydrogens, we have 12 vdW types. If we simply enumerated those in BCCs and treated all hydrogens equally… – removing wizardry

  • CB – removing wizardry is good, but the thought of the charges you might end up with makes me cringe – would probably want to come up with some specialised ones

  • DM – don’t want to give more parameters to fit to @Simon Boothroyd, data is already scarce

  • MS – scaling groups together is possible – if sigma changes, then epsilon has to change (they are correlated) – a dimension-reduction approach
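A sketch of the dimension-reduction idea as described, assuming a simple linear coupling between sigma and epsilon; the coupling constant `k` and all values below are illustrative, not a fitted correlation:

```python
# Illustrative sketch: fit sigma and epsilon as one correlated degree of
# freedom instead of two independent ones. k = -1 encodes the assumption
# "if sigma goes up, epsilon goes down by the same fraction".

def scaled_lj(sigma0, epsilon0, s, k=-1.0):
    """One-parameter family of LJ parameters: sigma scales by (1 + s) and
    epsilon by (1 + k*s), so the optimizer only sees the scalar s."""
    return sigma0 * (1.0 + s), epsilon0 * (1.0 + k * s)

# Growing sigma by 2% shrinks epsilon by 2% under k = -1.
sigma, epsilon = scaled_lj(3.40, 0.10, 0.02)
```

Halving the parameter count this way is what could make the study "relatively quick" with scarce data.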

  • DM – is there time to do this?

  • MS – already have some data ready, might be relatively quick

  • SB – we should first sort the vdw parameters and then focus on charges

  • MG – if we do surgical strikes, how do we know which “organs” to remove?

  • DM – the most systematic errors are associated with OH. If you have enough data, you can look at which group performs the worst with respect to quantity X

  • CB – I researched systematic errors based on functional groups. I think that covering 20% of functional groups can give us 80% of the benefit

  • MG – Is there a procedure for determining which are the outliers?

  • DM – if errors are distributed randomly across the functional groups… Find the groups associated with the highest errors

  • MG – Could we check which BCCs are overrepresented in the “most wrong” molecule charge results

  • SB – some preliminary studies done – bottlenecks are predominantly computer time (not human time) – add to the feasibility studies

  • Prioritizing feasibility studies:

    • SB – Charge fitting should be low-to-mid priority

  • CB – speaking on behalf of industry members – most industry members are happy with AM1-BCC charges, although they shouldn’t be. If we’re going to cherry-pick something, we should focus on WBO – it’s new, industry folks would love it – it’s a clear step forward

  • WBO is an infrastructure issue – @Jeffrey Wagner can make it a priority and push a release in 4 weeks if he clears everything else from his to-do list – no open-source alternative is ready

    • API won’t be perfect

    • Pull Trevor off WBO, reassign to something else

  • A. Roitberg has an experimental patch, but it’s not ready

  • DM – JW could send an email to Cresset and talk to people at Amber workshop to see if WBO support could be implemented sooner

  • CB – even a proof-of-concept should be sufficient, even if it depends on some closed-source infrastructure

  • DM – I agree with Chris here, having an infrastructure is the most important

  • LPW – is there a way to get WBOs from Mopac?

  • DM – there are a lot of issues to work out here, no timelines have been discussed yet

  • MG – Is it OK to remove other things from your to-do list?

  • JW – it could be, I’ve been looking at this for a while

  • MG – what are the other things you might not get done if you focus on this?

  • JW – passing meetings on to others, accumulating some technical debt, but it should be possible to absorb this

  • SB – is there a reason to push Sage to May?

  • DM – doesn’t have to be, it would be a milestone meeting to discuss feasibility studies, benchmarking, use in-person time

  • CB – it would make industry happy – here’s a new release of Parsley with provable, validated improvements, and we also have Sage in the crosshairs, which will incorporate WBO into torsions (just for torsions on conjugated bonds)
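For reference, the WBO torsion idea discussed here can be sketched as a linear interpolation of a torsion force constant between two reference Wiberg bond orders; the reference values below are illustrative, not actual parameters:

```python
# Illustrative sketch of WBO-based torsion interpolation: the torsion
# force constant at a bond's Wiberg bond order is interpolated linearly
# between two reference (WBO, k) points.

def interpolated_k(k1, k2, wbo1, wbo2, wbo):
    """Linearly interpolate a force constant at `wbo` between the
    reference points (wbo1, k1) and (wbo2, k2); values outside the
    range extrapolate along the same line."""
    t = (wbo - wbo1) / (wbo2 - wbo1)
    return k1 + t * (k2 - k1)

# A conjugated bond with WBO 1.4 lands 40% of the way between made-up
# single-bond (WBO 1.0) and double-bond (WBO 2.0) reference barriers.
k = interpolated_k(1.0, 10.0, 1.0, 2.0, 1.4)
```

This is why the scheme only pays off for torsions on conjugated bonds, where the WBO actually varies between molecules.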

  • LPW – it is a good idea to have some force field improvement to talk about (Parsley 1.1). Could we include some vdW parameters as well?

  • SB – we would work on our feasibility studies until mid-April, saving one month for computation

  • KCJ – We’ve never really published a benchmarking study

  • DM – “Concrete proposal”:

    • Make a minor release in the next few weeks based on Hyesu’s new parameters + complete refit → Parsley 1.0.1

    • Make another minor release in the next few weeks based on Victoria’s identified bad parameters → Parsley 1.0.2

    • Decide which QM data to use (going back to our existing QM datasets and improving parameters instead of expanding coverage)

    • We do another refit + release based on Hyesu’s new data and Simon’s PE work → “Sage”

    • WBO torsion interpolation will aim to come out by early March

    • WBO torsion FF would be “Rosemary”

    • Put John in charge of automated benchmarking – some set of QM data + condensed-phase properties

      • Decide on dashboard architecture (web dev?)

      • Decide on personnel and goals/data

  • DM – we need an automated benchmarking package; it should be easy to plug in OPLS geometry optimizations, etc.

  • MS – we could focus on automated benchmarking infrastructure and get it ready for the meeting in May – we fixed Parsley based on our benchmarking

  • DM – we are doing a full refit all the time

  • CB – If we can make a new release that improves

  • KCJ – So, for May meeting, we want to automate benchmarking, fix worst examples in parsley,

  • DM – WBO and AM1-BCC working in May, but not applied to a new FF

  • JW – I will assume that WBO is the highest priority on infrastructure

  • MG – Simon could focus on exploratory studies for vdw parameters

  • JW – what do we mean by automated benchmarking?

  • DM – @John Chodera feels the strongest about it, we might consult with him

  • DS – comparison to QM data could be run with QCArchive, a lot of things discussed today we could run automatically already

  • JW – we have a lot of dispersed people and tools that could do some of these things - how do we consolidate that?

  • DM – we don’t have a dedicated person at the moment

  • DS – doing a dashboard is not that hard nowadays

  • MS – showing data priority, making it pretty is secondary

  • DM – who decides which QM dataset we use? Either my lab or LPW lab would be good candidates

  • LPW – we didn’t play a role in selecting molecule sets during the last fitting procedure. I had some time to think about it over the past year, but I still lack confidence that I would be able to come up with a good set of molecules. I wouldn’t mind having a secondary role here, I also want to finish the Parsley paper as soon as possible.

  • DM – Hyesu, @Jessica Maat (Deactivated) and @Trevor Gokey and myself can work on this.

  • DM – figuring out what chemistry we do not have enough of will be a challenging task

  • CB – this question will change as parameters change. We need an adaptive method – I’ve changed this parameter here, do we need to run more molecules?

  • LPW – Make sure that new parameters are carefully tailored to apply only to the molecules we identified as bad – true positives / true negatives vs. false positives / false negatives (a confusion matrix for newly added parameters)
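LPW’s confusion-matrix check can be sketched with plain set arithmetic, assuming we have the set of molecules flagged as bad and the set a candidate parameter actually matches; all names here are hypothetical:

```python
# Illustrative sketch: compare the set of molecules a newly added
# parameter actually matches against the set it was intended to fix.

def parameter_confusion_matrix(flagged_bad, matched, all_molecules):
    """Sets of molecule IDs in, TP/FP/FN/TN counts out."""
    return {
        "TP": len(flagged_bad & matched),    # bad and covered by the parameter
        "FP": len(matched - flagged_bad),    # covered but was not bad
        "FN": len(flagged_bad - matched),    # bad but missed
        "TN": len(all_molecules - flagged_bad - matched),  # untouched, as intended
    }

# Hypothetical example: the new parameter hits mol_b (good) and mol_c
# (a false positive), while missing mol_a.
cm = parameter_confusion_matrix(
    {"mol_a", "mol_b"},
    {"mol_b", "mol_c"},
    {"mol_a", "mol_b", "mol_c", "mol_d"},
)
```

A high false-positive count would flag a SMARTS pattern that is broader than the problem it was written to fix.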

  • CB – virtual sites exist in other force fields so WBO would be a greater innovation, but virtual sites would still be of interest to industry people


Action items

@Hyesu Jang will lead the QM part of the optimization process
@Hyesu Jang, @David Mobley, @Trevor Gokey, and @Jessica Maat (Deactivated) will lead QM dataset selection
@Simon Boothroyd will run LJ fitting and @Owen Madin will assist so he can take over in the future
A leader needs to be assigned to automated benchmarking efforts (@John Chodera might have a good idea in which direction to go)
@Karmen Condic-Jurkic to organise a regular group call for FF releases every two weeks
@Karmen Condic-Jurkic to organise another meeting to discuss priorities for the meeting in May

 

Decisions

  1. There is no rush to have Sage ready for the meeting in May. The May meeting should be used to present scientific and infrastructure advances, and a Parsley minor release.
  2. A minor release will include minor adjustments in chemical perception based on @Hyesu Jang's new parameters and Victoria’s identification of bad parameters. This process will rely on the existing QM datasets (used for Parsley) instead of expanding coverage.
  3. The Sage release is likely to include new physical properties / vdW parameters based on Simon’s work, and WBO torsion interpolation could be included in Rosemary.
  4. A general rule of thumb for releasing FF under a new name – when new data and/or new science is used in parameterization.
  5. Set up a regular group call every two weeks for FF release topics. This call can be canceled if necessary.
  6. @Jeffrey Wagner will focus on WBO infrastructure implementation and reduce his efforts in other areas.