2022-08-25 Protein FF meeting note

Participants

  • @Pavan Behara

  • @Chapin Cavender

  • @Michael Gilson

  • @Trevor Gokey

  • @David Mobley

  • @Diego Nolasco (Deactivated)

  • @Matt Thompson

  • @Jeffrey Wagner

  • @Lily Wang

Goals

  • Timeline for protein FF benchmarks

Discussion topics

Item

Presenter

Notes

Timeline for protein FF benchmarks

@Chapin Cavender

  • CC – Main reason to sync up on this is to give DDotson a deadline for the F@H project: when do we expect to start running protein-ligand benchmarks, and when do we need results? Big picture, the first goal is to run these calcs once for benchmarking, but the bigger second goal is to have this automated to the extent that we can run it routinely. The big upcoming deadline is the NIH renewal in March. I’d argue that this won’t need to be done for the NIH renewal - it’s of more interest to pharma partners.

  • JW: Relative value compared to NMR/xtal benchmarks?

  • DM – I don’t think it’ll matter that much unless we see dramatic improvement.

  • MG – Could be good to have a few runs showing that it’s not worse.

  • DM – The reviewers will probably have a prior assumption that, if this looks alright by other metrics, it’s not dramatically worse for PL binding calcs

  • MG – Would be good to loop in MShirts.

  • CC – That makes sense. This seems more important to the delivery of Rosemary than to the NIH goals.

  • MG – Do we have a deadline for Rosemary?

  • JW: I think the question of whether graph charges will be included in Rosemary, along with the other parts, should come together for the Rosemary release. But most of the work comes from Chapin’s protein FF.

  • MG – Do we have a rough window when we’ll expect it?

  • JW: I think the first half of 2023 would be a potential window for the final release.

  • DN – We’d discussed having an internal deadline that we wouldn’t advertise publicly, but haven’t put our foot down on a date

  • MG: PLB means David Hahn’s PLB?

  • CC – Yes, reproducing the Gapsys paper. However, there’s a lot of discussion about streamlining/automating the workflow.

  • JW: Reproducing David Hahn’s work is difficult since some of the poses were curated by experts, and we’re trying to streamline the workflow by taking out as much human intervention as possible.

  • MG: I worked on the Merck fep-benchmark set, and I do see some implausible binding poses in some of the PLB systems.

  • DM – Yeah, all of these sets have strengths and weaknesses, which is why it’s important to put in community effort.

  • JW: I hear the bad poses will be replaced in the near future through the OpenFE work. Right now a reproducible workflow is the main priority, along with correcting poses, etc.

  • CC – Ok, I think the conclusion is that we won’t aim to have the protein-ligand benchmarks in place by March.

  • JW – Let’s offer a “need by” date of January.

    • CC – That sounds good. Then if things take 3 months to run, they’ll be done by April.

    • PB – So that’s expecting a release candidate FF by January?

      • CC – Yes. I’m expecting to run the first set of benchmarks using short peptides against NMR data, and that will help me choose from a few release candidate FFs. Then I’ll take the winner and hand that off to the F@H team in Jan.

    • JW – The difference

  • JW: I think we wanted to run ff14SB+GAFF, ff14SB+Sage, Rosemary RC1, and the final Rosemary version on F@H.

  • CC – I remember deciding to run the NMR benchmarks using ff14SB and the Rosemary release candidate, and that if we had additional compute we’d do more sampling/more systems rather than more force fields.

  • CC – The previous iteration of PLBenchmarks used CGENFF and OPLS. Will we do those as well?

    • DM – The inclusion of CGENFF and OPLS was largely due to Vytas’s access to those force fields/workflows. I don’t expect us to include those.

    • CC + JW – Agree.

  • MRS: Are we planning to run them on OpenMM, or on something like GROMACS (which can take advantage of CPUs, which OpenMM cannot)?

    • General – Both will use OpenMM (see the sketch after this list).

  • MRS: What FFs, if any, are we planning to compare to? (If this is already written up, just point me to the project page!)

    • LW – The plans above look good. I’ll make a page that has this information and link here.
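
A minimal sketch, assuming openff-toolkit >= 0.11, of what the OpenMM-based setup above could look like on the small-molecule side; the SMILES and force field file here are illustrative placeholders, not the actual benchmark inputs:

    from openff.toolkit import ForceField, Molecule

    # Placeholder ligand; the real runs would use the curated PLBenchmarks systems.
    ligand = Molecule.from_smiles("c1ccccc1O")

    # Sage small-molecule force field, i.e. the ligand half of an ff14SB + Sage setup.
    sage = ForceField("openff-2.0.0.offxml")

    # Build an OpenMM System from the ligand topology, ready for an OpenMM simulation.
    openmm_system = sage.create_openmm_system(ligand.to_topology())
    print(openmm_system.getNumParticles())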

Running the ESPALOMA force field through the same protein FF benchmarks

@John Chodera

  • If he wants to?

  • CC – I think it will be possible to just plug espaloma into the existing benchmarks repo once I have run them on the planned FFs.

  • JW: As long as we have an OpenMM system XML generated with the espaloma workflow (see the sketch below), it should be easy to get it working on F@H; beyond that, the rest would depend on compute resources. An allocation was made for projects at the beginning of the restructure through OpenFE.

  • JW – So I think the answer to both the “NMR benchmarks” and “protein-ligand benchmarks” questions is “yes”, but we may need external folks to do the manual work to get these to a submittable state.

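A rough sketch of the “OpenMM system XML generated with the espaloma workflow” step mentioned above, following the pattern in the espaloma README; the molecule is a placeholder, and the exact calls (e.g. esp.get_model) may differ between espaloma versions:

    import espaloma as esp
    from openff.toolkit import Molecule
    from openmm import XmlSerializer

    # Placeholder molecule; a real F@H submission would use the benchmark ligands.
    molecule = Molecule.from_smiles("c1ccccc1O")

    # Represent the molecule as an espaloma graph and apply a pretrained model.
    graph = esp.Graph(molecule)
    model = esp.get_model("latest")
    model(graph.heterograph)

    # Deploy the parametrized graph as an OpenMM System and serialize it to XML.
    system = esp.graphs.deploy.openmm_system_from_graph(graph)
    with open("system.xml", "w") as outfile:
        outfile.write(XmlSerializer.serialize(system))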

Someone in the Shirts group running protein crystal benchmarks?

@Michael Shirts

  • CC – A student in MS’s group asked about running crystal benchmarks. My plan is to spearhead NMR observable benchmarking. There’s good information in the LiveCOMS review about how to do this if they want to do it.

    • DM – I’m in contact with Mike Wall and David Wych who could help with this. But it’s tricky so I don’t know that we should go it alone.

    • MG – Peter Stern and Tobias Huefner may have some tools to help with this.

TG library charge model

 

  • DM – TG is working on a model for learning/building library charges. CC had proposed an experiment with this, regarding covalent modification of residues. So TG has some early results from this and was wondering if there’s a good model to compare this against.

  • LW – What timeframe were you thinking of doing this work?

  • TG – I’ve already got some results. If there are, say, phosphorylated residue charges available for comparison, that would be cool to compare against.

  • LW – I can check into this - I’ll get back to you when I’m in Australia. A good comparison may be the library charges averaged over many contexts from Chapin’s work.

  • CC – Those are only for standard AAs. But there are some phosphorylated AA parameter sets in AMBER force fields that could be used for comparisons.

  • JW: Yeah, I agree that Amber charges would be great.

  • TG – I’m not sure what the AMBER charges would provide - If I trained to those I’d just end up with those exact charges and we wouldn’t learn anything.

  • CC – To me, the experiment is “can your model reproduce AM1BCC charges?”

  • TG – Right now my method makes charges that look like

  • JW: To rephrase, you have a phosphorylated amino acid

  • TG – We’ve worked out what the training and test sets should be - It’s small molecules and AA analogues with PO4s. So the question was can I train a model based on those such that we get good performance on phosphorylated AAs.

  • DM – Question is “Can you build library charges for covalently modified residues without training on covalently modified residues?” - The question is whether there’s another method to make these library charges that we can compare against.

    • MG – Could compare to RESP.

    • TG – This would be a different model - This is a set of library charges.

    • TG – There was a discussion that a graph charge model may eventually perform better than other models, so I wanted to compare to something before that’s related.

    • MG – Could compare against VCharge, which uses an electronegativity balancing method.

  • TG – Let’s say I have parameters for a phosphorylated AA, and the error is 2 kcal/mol/e-. I don’t know whether “2” is good because I don’t have anything to compare against.

    • MG – There may not be any great direct methods to compare against. The best test here may be against physical observables.

  • TG – My method takes a bunch of molecules from QCA, then I split them and make parameters based on the differences in average AM1BCC charge. The SMARTS patterns only tag one atom, and they don’t extend to a whole residue; they just go out ~5 bonds (see the sketch after this list).

  • MG – An earlier instance of this sort of method was in early CHARMM - Momany and Rone “Charge Templates”.

  • CC – Could try to look at QM-derived charges, like Mulliken charges, and see if they give a similar transferability error. Not sure how you’d turn that into a number though.

    • TG – I could train to RESP or Mulliken charges; I’d need to think about that.

  • JW – Wouldn’t it be tricky to handle large distributed charge systems? Like HIP?

    • TG – Yeah, so for larger things I’ll need to define larger SMARTS that look over more bonds. I can tune this higher, but I already have ~1200 parameters, so I don’t want to make this too big too early.
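
A toy sketch of the averaging idea TG describes above: collect AM1BCC charges for atoms matched by a single-tagged-atom SMARTS across a set of molecules, take the mean as the library charge, and report the spread as a per-charge error. The SMILES, SMARTS, and variable names are made-up examples, not TG’s actual training set or patterns, and it assumes openff-toolkit >= 0.11 with an AM1BCC-capable backend installed:

    import numpy as np
    from openff.toolkit import Molecule
    from openff.units import unit

    # Made-up phosphate-containing training molecules (stand-ins for the QCA set).
    training_smiles = ["COP(=O)(O)O", "CCOP(=O)(O)O", "CC(C)OP(=O)(O)O"]

    # A single tagged atom, looking out only a short distance: an oxygen bonded to phosphorus.
    pattern = "[#8X2:1]-[#15]"

    matched_charges = []
    for smiles in training_smiles:
        mol = Molecule.from_smiles(smiles)
        mol.assign_partial_charges("am1bcc")
        charges = mol.partial_charges.m_as(unit.elementary_charge)
        for (atom_index,) in mol.chemical_environment_matches(pattern):
            matched_charges.append(charges[atom_index])

    values = np.array(matched_charges)
    library_charge = values.mean()  # the library charge assigned to this SMARTS
    rmse = np.sqrt(np.mean((values - library_charge) ** 2))
    print(f"library charge = {library_charge:.3f} e, RMSE vs AM1BCC = {rmse:.3f} e")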

Action items

Decisions