2021-04-07 Benchmarking for Industry Partners - Development Meeting notes

Date

Apr 7, 2021

Participants

  • @David Dotson

  • @Jeffrey Wagner

  • @Joshua Horton

  • @Lorenzo D'Amore

Goals

  • Current topics, issues discussion

Discussion topics

Item

Presenter

Notes


SMARTS filter

Josh

  • SMARTS filter

    • uses QCSubmit under the hood; 2-liner

    • JH: would prefer if it was possible to specify SMARTS without numerical tags

    • JW: I don’t mind this at all; actually kinda prefer that there’s one way to do it

  • Jeff reviewing

Coverage fix

Josh+Jeff

  • Jeff reviewing

Calendar refresh

David

  • I’ll refresh the calendar invite; put it on the OpenFF calendar

Updates

 

  • JW: haven’t done much on the benchmarking front

    • unlikely to be able to test Schrodinger

    • would need to request access via Gilson lab; will probably be a “no”

    • one discussion point: when do we want to jump to openff-toolkit usage in the deployed envs

  • DH: gave demo on 3/26; same as partner call, but with more detail

    • Xavier, Bill, and Kaushik present

    • not sure if they’ve given it a try on their own infrastructure

    • Lorenzo starting to work on analysis expansions

  • LD: been talking with Xavier, Bill

    • in particular, Xavier’s suggestion: find the global QM minimum, take the index of that conformer, check the MM energy of that same conformer, and see whether it is also the MM global minimum. If the MM and QM global-minimum conformers are not the same, report the RMSD between them.

    • DH: Xavi is more interested in the optimized geometry MM evaluation, not the initial geometry optimization; this is basically the match-minima analysis


    • DH: could add a column that gives you which conformer was the reference

    • DD: let’s collect the suggestions from Bill, Alberto, and Xavier for this analysis; we’ll identify what they ultimately want (likely the same thing), and then build an approach (if it doesn’t already exist as e.g. match-minima)

    • JW: I wonder if there’s some way we can provide them with composable components

      • I see openff-benchmark as three stages in near term future

      • stage 1: finish out season 1, highest priority

      • stage 2: refactor into a more coherent set of workflow components

      • stage 3: a season 2, new results, etc.

    • LD: once we finish season 1, all data is there, can do all analysis you want

      • agree with Jeff’s assessment

  • JW: for season 1, need

    • 2 PRs from JH, in review

    • Schrodinger pathway, need a reviewer with access to Schrodinger

      • Lorenzo likely best positioned

    • DD: could engage Bill directly; I can facilitate this, since I’ve worked with him on live testing many times

      • Identify any additional software we need in the env file

      • [committed] set up a call for Friday with Bill

  • JH: 2 PRs, SMARTS filter and fix to coverage

 
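Xavier’s suggested analysis from the discussion above can be sketched as follows. This is a minimal illustration, not the openff-benchmark API: the function name, the dict-of-energies input shape, and the precomputed RMSD lookup are all assumptions made for the example.

```python
# Hypothetical sketch (not openff-benchmark code): find the QM global-minimum
# conformer, check whether that same conformer is also the MM global minimum,
# and if not, report the RMSD between the two minimum-energy conformers.

def compare_global_minima(qm_energies, mm_energies, rmsd_matrix=None):
    """Return (qm_min_id, mm_min_id, rmsd).

    qm_energies, mm_energies: dict mapping conformer id -> energy
    rmsd_matrix: optional dict mapping an (id_a, id_b) pair -> RMSD
    rmsd is 0.0 when the minima coincide, None when no RMSD is available.
    """
    qm_min = min(qm_energies, key=qm_energies.get)  # QM global minimum
    mm_min = min(mm_energies, key=mm_energies.get)  # MM global minimum
    if qm_min == mm_min:
        return qm_min, mm_min, 0.0
    rmsd = None
    if rmsd_matrix is not None:
        # RMSD is symmetric; accept either key order
        rmsd = rmsd_matrix.get((qm_min, mm_min),
                               rmsd_matrix.get((mm_min, qm_min)))
    return qm_min, mm_min, rmsd


# Toy example: conformer "c2" is the QM minimum, but "c0" is the MM minimum.
qm = {"c0": -10.1, "c1": -9.8, "c2": -10.4}
mm = {"c0": -12.0, "c1": -11.5, "c2": -11.9}
rmsds = {("c2", "c0"): 0.85}
print(compare_global_minima(qm, mm, rmsds))  # → ('c2', 'c0', 0.85)
```

As DH notes above, this amounts to the match-minima analysis evaluated at the optimized geometries; adding a column recording which conformer was the reference would make the disagreement cases easy to spot in the results table.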

Season 2

Jeff

  • Suggested a Season 2 in the fall; the suggestion wasn’t turned down

    • A Season 2 should be preconditioned on a refactor that makes openff-benchmark easy to run continuously, in a way that is useful for the project, for users, etc.

Action items

@David Dotson will set up a working session for 4/9 with Bill Swope along with @Lorenzo D'Amore , optionally @David Hahn
@David Dotson will refresh the calendar invite for this call, put it on Open Force Field calendar
@Lorenzo D'Amore will compile the requests for the global-QM-minimum energy comparison, and share them with the group for discussion and to work out a solution
@Jeffrey Wagner will review the SMARTS filter PR created by @Joshua Horton
@Jeffrey Wagner will review the coverage fix PR created by @Joshua Horton

Decisions