2023-08-14 Meeting notes

 Date

Aug 7 2023

 Participants

  • @Matt Thompson

  • @Lily Wang

  • @Brent Westbrook

  • @Jeffrey Wagner

 Discussion topics

Notes

  • Results from larger (~1600 conformers) dataset

  • MT – Future tickets/what should I work on next? Could defer until after we’ve checked accuracy of current stuff. My thinking right now is mostly “tidying up”/“making a CLI”.

  • LW – Looking at the project page, it looks like the next stage would be phys prop calcs?

    • MT – Big remaining step for conformer analysis is MM minimizations.

    • LW – I think BW and I are in a good place to test the accuracy here. Is there parallelization? (See the sketch after this thread.)

    • MT – That makes sense. Then the next thing I should work on is roadmap item 2, the Evaluator interface. Roadmap item 3 (support for custom ParameterHandlers) is either easy or already done. I don’t actually know how Simon used Evaluator to run these benchmarks.

    • LW – I’ll see if I have any SB scripts around. He had a codebase with scripts from the benchmarking.
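
The parallelization question above comes down to the fact that per-conformer MM minimizations are independent of one another, so a process pool can fan them out across cores. A minimal sketch, with a hypothetical `minimize_conformer` standing in for the real OpenMM/OpenFF minimization call:

```python
from concurrent.futures import ProcessPoolExecutor

def minimize_conformer(conformer_id: int) -> tuple[int, float]:
    # Hypothetical stand-in for the real per-conformer MM minimization;
    # each call is independent of every other conformer.
    energy = 0.0  # placeholder for the minimized energy
    return conformer_id, energy

def minimize_all(conformer_ids, n_workers: int = 8):
    # No shared state between tasks, so a process pool distributes the
    # minimizations across cores and returns results in input order.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(minimize_conformer, conformer_ids))

if __name__ == "__main__":
    print(minimize_all(range(4), n_workers=2))
```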

  • JW – Should we focus on making the workflow more resilient/making more intermediate checkpoints?

    • LW – That’d be nice, but it might be a lot of work.

    • MT – That could be tricky. It would mean a lot of looking at the database and judging whether conformers/results are where they’re expected. If a run dies midway through, the conformers that were already minimized will be in the database, so you should be able to hook back into the database at that point and resume. So there’s a decent sense in which the database is a natural checkpoint system (see the sketch after this thread).

    • LW – Is there any way to tell when an error has occurred? Like, if we’re going through and there’s a molecule with 3 confs but the computation ended because a worker crashed, how could we handle that?

    • MT – There are a few options; I’d need to think about it. Right now there’s no elegant system to handle that, and only a bare minimum of logging, so info about what crashed and why isn’t captured well.

    • LW – Ok, sounds like this would be helpful, but we can come back to it if there’s not bandwidth now.

    • JW – The QC stack’s quite-successful error policy is “if it fails 3 times, stop trying”. This can be overridden by user intervention via python.
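
A minimal sketch of the two ideas above: the database as the natural checkpoint (resuming simply skips anything already marked done) combined with the QC stack’s “if it fails 3 times, stop trying” policy. The stdlib `sqlite3` and the table layout here are stand-ins, not the project’s actual schema:

```python
import sqlite3

MAX_ATTEMPTS = 3  # mirrors the QC stack's "fail 3 times, stop trying" policy

def init_db(path: str = "minimization.db") -> sqlite3.Connection:
    con = sqlite3.connect(path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS conformers (
               id       INTEGER PRIMARY KEY,
               molecule TEXT NOT NULL,
               status   TEXT NOT NULL DEFAULT 'pending',  -- pending | done | failed
               attempts INTEGER NOT NULL DEFAULT 0
           )"""
    )
    con.commit()
    return con

def next_batch(con: sqlite3.Connection, size: int = 10) -> list:
    # Resume point: anything not yet done and still under the retry cap
    # is work, so a crashed run picks up exactly where it left off.
    return con.execute(
        "SELECT id, molecule FROM conformers "
        "WHERE status != 'done' AND attempts < ? LIMIT ?",
        (MAX_ATTEMPTS, size),
    ).fetchall()

def record_result(con: sqlite3.Connection, conformer_id: int, ok: bool) -> None:
    if ok:
        con.execute(
            "UPDATE conformers SET status = 'done' WHERE id = ?",
            (conformer_id,),
        )
    else:
        # Count the failure; after MAX_ATTEMPTS the row is never retried
        # unless a user resets `attempts` by hand (the manual override).
        con.execute(
            "UPDATE conformers SET status = 'failed', attempts = attempts + 1 "
            "WHERE id = ?",
            (conformer_id,),
        )
    con.commit()
```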

  • JW –

    • Nagl charges? Could be faster than AM1BCC.

      • LW – As an update, I’ve made the NAGL 0.3.0 release, which has broken a few things.

      • MT – Gotcha, I handled it.

      • MT – One thing about Nagl: when Nagl 0.X comes out with nagl-models 0.Y and we’re ready to benchmark it, how will this be expected to fit into the pipeline?

      • LW – We’ve discussed that Nagl will live as another backend that provides AM1BCC charges.

      • JW – This can be handled in the force field. When we want to run a Nagl model that isn’t production-ready, we’ll call it through the ChargeIncrementModelHandler. Then once it’s production-ready we’ll have it in the chain of things called for the ToolkitAM1BCCHandler (see the sketch below).

      • MT – Ok, so this will be handled at a software level

      • JW – Yup
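
A minimal sketch of the routing JW describes, assuming a recent OpenFF Toolkit `ForceField` API; the model name assigned to `partial_charge_method` is hypothetical:

```python
from openff.toolkit import ForceField

ff = ForceField("openff-2.1.0.offxml")

# Pre-production route: add/fetch a ChargeIncrementModel handler, which
# takes precedence over ToolkitAM1BCC during charge assignment, and point
# its base-charge method at the NAGL model (hypothetical name below).
handler = ff.get_parameter_handler("ChargeIncrementModel")
handler.partial_charge_method = "hypothetical-nagl-am1bcc-model"
```

Once the model is production-ready, no force-field edit would be needed; per JW, the NAGL backend would instead sit in the chain of toolkits consulted by the ToolkitAM1BCCHandler.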


New tickets: