2023-07-24 Meeting notes

 Date

Jul 24, 2023

 Participants

  • @Matt Thompson

  • @Lily Wang

  • @Alexandra McIsaac

  • @Brent Westbrook

  • @Jeffrey Wagner

 Discussion topics

Notes

  • MT – Had some cool stuff coming together, but it’s throwing an error right now so no live demo today. Iterating on the database model. Right now it does some core things that I want it to do. Currently there’s a table of Toolkit Molecule objects, then another table of QM conformers, then another table of MM conformers. These are structured in a pseudo-hierarchical way - each conformer points back to the molecule that represents it. There’s a little intrinsic deduplication - a QM conformer doesn’t know its chemical graph; instead it points to an OFF molecule.
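
A rough sketch of what such a schema could look like (the class and column names below are illustrative guesses, not the prototype’s actual models), assuming SQLAlchemy for the storage layer:

```python
# Hypothetical sketch of the molecule / QM conformer / MM conformer tables
# described above; names and column choices are assumptions for illustration.
from sqlalchemy import Column, Float, ForeignKey, Integer, PickleType, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class MoleculeRecord(Base):
    """One row per OpenFF Toolkit Molecule (chemical graph, no coordinates)."""

    __tablename__ = "molecules"
    id = Column(Integer, primary_key=True)
    inchi_key = Column(String, unique=True, index=True)
    mapped_smiles = Column(String)


class QMConformerRecord(Base):
    """A QM conformer; it stores no chemical graph, only a pointer back to
    its parent molecule plus coordinates and an energy."""

    __tablename__ = "qm_conformers"
    id = Column(Integer, primary_key=True)
    molecule_id = Column(Integer, ForeignKey("molecules.id"), index=True)
    qcarchive_id = Column(Integer, unique=True)
    coordinates = Column(PickleType)  # (n_atoms, 3) array, angstrom
    energy = Column(Float)  # kcal/mol
    molecule = relationship("MoleculeRecord", backref="qm_conformers")


class MMConformerRecord(Base):
    """An MM-minimized conformer, again pointing back to its parent molecule."""

    __tablename__ = "mm_conformers"
    id = Column(Integer, primary_key=True)
    molecule_id = Column(Integer, ForeignKey("molecules.id"), index=True)
    force_field = Column(String)  # which MM force field produced it
    coordinates = Column(PickleType)
    energy = Column(Float)  # kcal/mol
    molecule = relationship("MoleculeRecord", backref="mm_conformers")
```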

  • MT – Also added some simple QCEngine calls to do minimizations. Will print InChIKeys and energies to stdout.

    • MT – InChIKeys are indices in some tables, and QCArchive IDs should be unique as well. From the code I used from NAGL, I think the Hs are explicit in the InChIKeys.

    • LW – NAGL deduplication checks by InChI and RMSD (see the sketch below). This keeps redundant/duplicate training data from unbalancing the datasets. Planning to delete this code in an upcoming release.

    • MT – I know which code you’re talking about - I’ve copied it in.
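
A minimal sketch of an InChIKey-plus-RMSD duplicate check along these lines (the function name, threshold, and signature are assumptions for illustration, not the code copied from NAGL):

```python
# Illustrative conformer deduplication: group by fixed-hydrogen InChIKey,
# then reject conformers that sit within an RMSD threshold of a stored one.
import numpy as np
from openff.toolkit import Molecule


def is_duplicate(
    molecule: Molecule,
    conformer: np.ndarray,                  # (n_atoms, 3), angstrom
    existing: dict[str, list[np.ndarray]],  # InChIKey -> stored conformers
    rmsd_threshold: float = 0.5,            # angstrom; illustrative value
) -> bool:
    """Return True if this conformer is already stored for this molecule."""
    # Fixed-hydrogen InChIKey, so the explicit Hs mentioned above are kept.
    key = molecule.to_inchikey(fixed_hydrogens=True)
    if key not in existing:
        return False
    for stored in existing[key]:
        # Naive all-atom RMSD with no alignment or symmetry handling;
        # a production check would be more careful than this.
        rmsd = np.sqrt(np.mean(np.sum((stored - conformer) ** 2, axis=-1)))
        if rmsd < rmsd_threshold:
            return True
    return False
```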

  • MT – A few other shortcuts, like checking that an MM conformer doesn’t already exist before resubmitting it. Currently working on parallelizing.

    • JW – Would this be able to have tables of different MM conformers that have been minimized using different FFs?

    • MT – Current prototype only does one MM FF. But in the future we can generalize to have multiple MM force fields.

  • MT – Big current bottleneck is converting QCSubmit Mols to records, and then running MM minimizations. My guess is that runtime bottlenecks are about what we expect.

  • MT – After this I want to add RMSD and TFD (torsion fingerprint deviation). I’ve been doing development on 40 or 50 mols with 70 conformers (one of the benchmarking sets filtered to just C and H). So there may be gremlins when we go to a non-boring dataset.
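
One way those two metrics could be computed, e.g. with RDKit (a sketch only; the actual comparison code may differ):

```python
# Compare a QM-minimized and an MM-minimized conformer of the same molecule
# by best-fit RMSD and torsion fingerprint deviation (TFD), using RDKit.
from rdkit.Chem import TorsionFingerprints, rdMolAlign


def compare_conformers(qm_mol, mm_mol):
    """Both arguments are RDKit Mol objects carrying a single conformer each."""
    rmsd = rdMolAlign.GetBestRMS(mm_mol, qm_mol)  # angstrom; aligns mm_mol
    # Note: TFD may raise for molecules with no rotatable torsions, so small
    # rigid molecules would need special-casing.
    tfd = TorsionFingerprints.GetTFDBetweenMolecules(qm_mol, mm_mol)
    return rmsd, tfd
```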

  • MT – I also want to have better storage of this data - even just a minimal class that wraps a dict.

    • LW – Are you thinking of a pydantic class or something more flexible?

    • MT – Haven’t thought about it too much - maybe I’m after something in between. Like, I’m working off the assumption that you’ll know which OFF molecule it came from, which FF was used, and which QCA mol it came from, and you can do some filtering based on chemistry or something. So there’s an opportunity for subclassing and reuse of data structures.

    • JW – It’s easy to get tripped up using something like ddE - Have to remember what the minimum energy/reference conformer is.

    • MT – I’m not being too strict about the meaning of ddE - comparing MM and QM, you’ll have some parentheses and minus signs - I could interchangeably have been saying “RMSD”. So the output of an operation would be one list of QM energies, and another list of MM energies, and then the analysis would do whatever is appropriate with those.
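
A sketch of what that minimal container and the ddE bookkeeping could look like (the class name, fields, and the choice of the QM energy minimum as the reference conformer are illustrative assumptions, not settled design):

```python
# Minimal per-molecule results container: parallel lists of QM and MM
# energies for the same conformers, plus a ddE helper referenced to the
# conformer with the lowest QM energy.
from dataclasses import dataclass

import numpy as np


@dataclass
class MoleculeResult:
    inchi_key: str            # which OFF molecule this came from
    force_field: str          # which MM force field was used
    qcarchive_ids: list[int]  # which QCArchive records the conformers came from
    qm_energies: list[float]  # kcal/mol, one entry per conformer
    mm_energies: list[float]  # kcal/mol, same conformer ordering

    def dde(self) -> np.ndarray:
        """(MM - MM_ref) - (QM - QM_ref), with the QM minimum as reference."""
        qm = np.asarray(self.qm_energies)
        mm = np.asarray(self.mm_energies)
        ref = int(np.argmin(qm))  # reference conformer: lowest QM energy
        return (mm - mm[ref]) - (qm - qm[ref])
```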

  • MT – Q: How valuable would it be for one of these to be presented as a live dashboard? So like you could view completed jobs before the whole dataset is finished.

    • LW – I think a dashboard isn’t necessary early on, but if it could be added later that’d be cool.

  • LM – Is the goal of this project to tabulate the performance of Sage so we can measure the effect of modifications?

    • MT – My understanding is generally “yes” - You’ll get different numbers out with different datasets and different analyses. These sorts of analyses have been run a lot through our FF generations. The storing of the data is sort of a secondary focus here. It could be that the same benchmarks will be run every time, or they may change. But what I think is most exciting is that this will avoid needing to get additional people/QC data to run. So you can be proposing a new improvement to the flagship FFs, and you won’t need to ask how to validate the improvements, and instead do it yourself. Ideally this will just take a few hours and people self-benchmarking will become routine. One thing that’s been holding the science team back is that people are doing amazing things, but with different benchmarks, and it makes it hard to measure things on an equal and agreed-upon footing. And if we make something general enough to satisfy our own science team, it should be good enough for industry as well.

    • LW – In general, what goes into an FF release depends on how much it leads to improvement, and we haven’t standardized the definition of “improvement”. If you dig through the slack you’ll find instances of people asking how to measure performance, usually resulting in PB having to run benchmarks for other people. And so having it set up for everyone to run themselves should be a big improvement.

    • MT – Also, there are decisions about product management - deciding when something is good enough to be a new release is complex, and we’re limited in organizational capacity to handle all the research lines.

  • LW – Sounds like good progress. Are there blockers that we can help with?

    • MT – I’m making good progress right now, but what’s your bandwidth for trying to install and run a prototype of this (if that’d even be useful)? Though I’d caution that this will be super unstable - both in terms of outputs and API - so it shouldn’t be used for production work.

    • LW – Are you looking for feedback on the API?

    • MT – Yes.

    • LW – I’ll give it a shot, will try on an expanded dataset.

    • BW + LM – Me too.

      • LW – Could be good for testing different architectures.

    • MT – I’ll post on Slack when/if this is ready to use.

  • MT – How does this fit into the Rosemary plan? Specifically, what more protein-relevant benchmarks should I anticipate coming in?

    • LW – I think the same conformer+phys prop benchmarks will be used. Would be good to bring in CC’s NMR benchmarks, though big protein stability benchmarks probably aren’t necessary to bring in.


 Action items

 Decisions