JW – I spoke with LW about long-term development tasks to support the science team. We brainstormed a bit but didn’t settle on a single direction. She may reach out to you. Any particular interest in the benchmarking direction?
MT – Interested in the benchmarking direction in general. The two big liabilities would be 1) getting everyone to agree on a specific project scope (very political/interpersonal) and 2) being in an infrastructure role while needing to defend benchmarks against scientific criticism.
JW – Thanks. I’ll bring this back to LW. I’d try to have the science team set as much of the scope as possible.
MT – I’d also be interested in working on a chemistry checker if it were specced.
JW – MH mentioned a typing package - not sure what was meant. MH explained it as “we could all use OpenFF units but serialize quantities to json differently”
MT – The default behavior for pint Quantity serialization is that you can directly stringify and listify. But a Quantity is inherently an object, not a string/float/int/built-in type. So this doesn’t settle the choice between val = (magnitude, unit), val = "magnitude unit", and val_unit = magnitude.
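A minimal sketch of the three candidate layouts, assuming a simple scalar Quantity (the key names here are placeholders, not a settled convention):

```python
from openff.units import unit

q = 4.5 * unit.nanometer

# Candidate 1: magnitude and unit as a pair (JSON renders the tuple as a list)
as_pair = {"val": (q.magnitude, str(q.units))}   # {"val": [4.5, "nanometer"]}

# Candidate 2: a single stringified quantity
as_string = {"val": f"{q.magnitude} {q.units}"}  # {"val": "4.5 nanometer"}

# Candidate 3: the unit baked into the key, bare magnitude as the value
as_keyed = {f"val_{q.units}": q.magnitude}       # {"val_nanometer": 4.5}
```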
MT – Right now (somewhat opinionated) serialization code is in Interchange (using some decisions driven by the intersection of pint, numpy, and pydantic). MH suggested breaking this into a standalone package.
MT – I DON’T think it’s appropriate to put this in openff-units, since I don’t want that to be a dumping ground. Kinda same with openff-utilities. So we were thinking about something like openff-pydantic (MH may have called this openff-typing)
JW – Can any pydantic object become json? Is it just pydantic → dict → json?
MT – Yes - That’s a fundamental guarantee of pydantic. But the path to json doesn’t always go through the dict representation.
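A minimal sketch of that distinction, assuming the pydantic v1 API (which Interchange uses); the illustrative json_encoders hook fires on the .json() path but not on .dict():

```python
from openff.units import unit
from pydantic import BaseModel  # pydantic v1 API assumed

class Box(BaseModel):  # illustrative model, not an actual Interchange class
    length: unit.Quantity

    class Config:
        arbitrary_types_allowed = True
        # Applied only on the .json() path; .dict() leaves the Quantity alone
        json_encoders = {unit.Quantity: lambda q: f"{q.magnitude} {q.units}"}

box = Box(length=4.5 * unit.nanometer)
box.dict()  # {'length': <Quantity(4.5, 'nanometer')>} -- still an object
box.json()  # '{"length": "4.5 nanometer"}' -- no dict round-trip needed
```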
MT – Ultimately we will have a function that eats an openff.units.Quantity and returns something made of built-in types that can go to json. The question is where this will live. In an openff-pydantic package, this would be set as “the way to turn an openff-units object into json”.
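A hypothetical sketch of such a function; the name, output layout, and numpy handling are illustrative assumptions, not a decided API:

```python
from openff.units import unit

def quantity_to_builtins(q: unit.Quantity) -> dict:
    """Reduce a Quantity to built-in types that json.dumps can handle directly."""
    magnitude = q.magnitude
    if hasattr(magnitude, "tolist"):
        # numpy arrays/scalars aren't JSON-serializable; unwrap them first
        magnitude = magnitude.tolist()
    return {"val": magnitude, "unit": str(q.units)}
```

In an openff-pydantic package, this (or something like it) would be registered as the canonical encoder, e.g. via a json_encoders hook like the one sketched above.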
JW – It sounds like you’ve thought a lot about this and I’m happy it’s moving forward, so use your judgement on whether to make or re-scope repos to help it along, and just loop me in if you want feedback. If you’re up for talking more, I’m interested in considering adding this to openff-units.
MT – I’d like to see how this goes - If it works well then we could consider moving it, but I don’t want it to be able to break our stuff in production. Stages would be:
(current) - Serialization code in Interchange
1) Move serialization code out of Interchange into an early openff-pydantic package (then we need to support this in production)
2) Consider merging into openff-units (MT doesn’t see much value in merging these)
JW – Thanks for supporting MH on SMIRNOFF spec updates. Ideas for how to improve the process? Like, “here’s a list of what requires an EP versus what doesn’t”? The potential fields are unstandardized because I lost steam midway through and we never had a process for formalizing changes.
MT – I understand why things are kinda messy; I’m mostly interested in making things better going forward. Some changes are obvious typo fixes that clearly don’t require the whole committee, and others are clearly complex enough to need review - the tricky part is the cases in between, which are exceptionally hard to call. Supporting MH on this has required clarifying some of those points.
JW – I agree with this classification, and that the middle cases are expensive/confusing. So if possible I’d like a system that clarifies when a version/spec change needs SMIRNOFF committee review versus when it’s clearly a cleanup.
MT – It could quite possibly be more work to make this classification system than it would ever save. I’m OK with having judgement calls to identify which change proposals should become EPs.
JW – I responded to Stan Wlodeck that we’ll need a few weeks to debug before we can send vsite cases. Updates on vsite stability?
MT – Haven’t had time to work on this since we talked last. No update. Even if I get past the current weirdness, I’d like to see this used internally so we have some confidence before we send it out.
LD question:
[not urgent] Hi Jeff, I see you're in focused work days, so please feel free to respond when you're free, no rush. At JNS we'd like to include OpenFF in a CL pipeline, specifically using OpenFF with Gromacs (and other software in the future). For the moment, we'd like to try both routes (Interchange, ParmEd) to see which one fits better in the workflow, is more stable, etc. I see there are nice notebook examples here: https://github.com/openforcefield/openff-toolkit/tree/master/examples/using_smirnoff_in_amber_or_gromacs for both "convert with parmed" and "export with interchange". My question is whether these are the best examples to follow or whether there's anything more updated/better.
JW will respond - If his workflow involves a protein, then Interchange won’t help
MT – For importers, I see there being 3 stages (a rough sketch follows the list):
1) Importing an appropriately populated OpenMM System into Interchange (mirroring ParmEd’s load_topology). This gets us most of the stuff for free - OpenMM has good parsers for everything.
2) Use OpenMM to handle the backend for public Interchange.from_amber and from_gromacs
3) Same functionality, but with native Interchange code rather than OpenMM.
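A hypothetical sketch of how stage 2 might lean on OpenMM’s GROMACS parsers and feed the stage-1 importer; the from_openmm entry point and this signature are assumptions, not existing API:

```python
import openmm.app
from openff.interchange import Interchange

def from_gromacs(top_file: str, gro_file: str) -> Interchange:
    """Stage 2 sketch: let OpenMM parse the files, then reuse the stage-1 importer."""
    gro = openmm.app.GromacsGroFile(gro_file)
    top = openmm.app.GromacsTopFile(
        top_file, periodicBoxVectors=gro.getPeriodicBoxVectors()
    )
    system = top.createSystem()  # an "appropriately populated" openmm.System
    # Stage 1 entry point: import the populated System/Topology pair
    # (the load_topology analogue; assumed name, not existing API)
    return Interchange.from_openmm(topology=top.topology, system=system)
```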
JW – Doesn’t OpenMM use ParmEd for loading?
(we looked at this and it’s likely that some files in OpenMM are copied from ParmEd, but in previous discussion with Eastman/Swails, this code sharing was known to both sides and intentional)
MT – I trust the files in OpenMM more - they’ll be better maintained, and I’d rather talk to OpenMM than to files on disk.