Participants
Discussion topics
Updates

SB / AD
- Finishing off the coverage tool and adding test cases.
- Breaking toolkits.py into several modules:
  - Functionality for each toolkit could move into separate modules.
  - Some question about how much to remove duplicative tests, versus how much to simplify toolkits.
  - Looks like there's a combination of two concepts: a Toolbox (mapping low-level calls to RDKit/OpenEye equivalents) versus a Toolkit (a registry of Toolbox-exposed functions).
  - Individual methods in a ToolkitWrapper/Toolkit could be overridden using a context manager.
  - Toolkit calls could use dynamic resolution, where they'd try a series of toolboxes to find one that exposes each call it needs.
- AD – Should we split up toolkits into separate files? Into implementation layers?
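The "context manager override" and "dynamic resolution" ideas above could be sketched roughly as follows. This is a hypothetical illustration only; `ToolkitRegistry`, `RDKitToolbox`, `OpenEyeToolbox`, and all method names here are stand-ins invented for this sketch, not the actual OpenFF toolkit API.

```python
# Hypothetical sketch of the proposed design (not the real OpenFF API):
# a registry dispatches each call to the first toolbox that exposes it
# ("dynamic resolution"), and a context manager temporarily overrides a
# single method, restoring it on exit.

from contextlib import contextmanager

class RDKitToolbox:
    def to_smiles(self, molecule):
        return f"smiles-via-rdkit({molecule})"

class OpenEyeToolbox:
    def assign_partial_charges(self, molecule):
        return f"charges-via-openeye({molecule})"

class ToolkitRegistry:
    def __init__(self, toolboxes):
        self.toolboxes = list(toolboxes)
        self._overrides = {}

    def call(self, method_name, *args, **kwargs):
        # An active override wins; otherwise try each toolbox in
        # priority order and use the first one exposing the method.
        if method_name in self._overrides:
            return self._overrides[method_name](*args, **kwargs)
        for toolbox in self.toolboxes:
            method = getattr(toolbox, method_name, None)
            if method is not None:
                return method(*args, **kwargs)
        raise NotImplementedError(f"No toolbox exposes {method_name!r}")

    @contextmanager
    def override(self, method_name, func):
        # Temporarily replace one method; automatically restored on
        # exit, which scopes the override and leaves some provenance.
        self._overrides[method_name] = func
        try:
            yield
        finally:
            del self._overrides[method_name]

registry = ToolkitRegistry([RDKitToolbox(), OpenEyeToolbox()])
print(registry.call("to_smiles", "ethanol"))               # resolved by RDKitToolbox
print(registry.call("assign_partial_charges", "ethanol"))  # falls through to OpenEyeToolbox

with registry.override("assign_partial_charges", lambda m: f"custom({m})"):
    print(registry.call("assign_partial_charges", "ethanol"))  # override active
```

This would replace explicit register/deregister calls with a scoped, self-cleaning override, which is the advantage AD describes below.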
- MT – What's the advantage of this design?
- AD – It doesn't add complexity; these would replace the register/deregister calls that currently occur.
- SB – I don't think this adds complexity; instead it adds some fuzziness. My biggest complaint with registries currently is that it's hard to define which toolkits support which "features" (i.e. there's no formal "contract" that says a toolkit that can assign charges must have an `assign_partial_charges` method) and which underlying toolkit gets called for a particular operation (i.e. opaque chaining). One way to improve this would be to increase the use of inheritance in the toolkits.
- AD – Could you say in more detail where inheritance would be useful?
- SB – For example, ELF conformer selection. By default we want to detect trans-carboxylic acids, but someone may want to turn off this check, or extend it with extra checks in addition to the defaults.
- AD – The model that I proposed gives the ability to override the methods that are called.
- AD (demos proposed changes)
- SB – This isn't true overriding in a pure sense; it's more just chaining function calls, and it doesn't always give the same functionality as inheritance. This reminds me of smirnoff_hack.py in openforcefield-forcebalance, in that it allows people to essentially opaquely over*write* functions with little warning/provenance for doing so.
- SB – Agree re: need for provenance, but don't necessarily think this is the way to go about it.
- JW – I see that this fixes a lot of problems, but is this the simplest design to do so? This would be a big change, and would break the code of people who are customizing registries.
- AD/DD/LW/JW will meet on Wednesday at 7 AM Pacific / 4 PM Sweden to try out the new design.
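SB's ELF example above can be illustrated with a small inheritance sketch. Everything here is hypothetical: the class names, the `conformer_passes_checks` hook, and the toy conformer representation are invented for this illustration and are not the OpenFF toolkit's actual ELF implementation.

```python
# Illustrative sketch (not the real OpenFF API) of the inheritance point:
# the default ELF conformer selection applies a trans-carboxylic-acid
# check, and subclasses can disable it or add checks by overriding one
# method, rather than by opaquely replacing functions.

class ELFConformerSelector:
    def conformer_passes_checks(self, conformer):
        # Default behavior: reject trans-carboxylic-acid conformers.
        return not self.is_trans_carboxylic_acid(conformer)

    def is_trans_carboxylic_acid(self, conformer):
        # Toy stand-in for a real substructure check.
        return conformer.get("trans_cooh", False)

    def select(self, conformers):
        return [c for c in conformers if self.conformer_passes_checks(c)]

class PermissiveSelector(ELFConformerSelector):
    def conformer_passes_checks(self, conformer):
        # Turn off the default trans-COOH filter entirely.
        return True

class StricterSelector(ELFConformerSelector):
    def conformer_passes_checks(self, conformer):
        # Keep the default check and add an extra (toy) energy cutoff.
        return (super().conformer_passes_checks(conformer)
                and conformer.get("energy", 0.0) < 5.0)

confs = [{"trans_cooh": True, "energy": 1.0},
         {"trans_cooh": False, "energy": 9.0}]
print(len(ELFConformerSelector().select(confs)))  # 1: trans-COOH conformer rejected
print(len(PermissiveSelector().select(confs)))    # 2: filter disabled
print(len(StricterSelector().select(confs)))      # 0: both fail the combined checks
```

The contrast with call-chaining is that `super()` lets a subclass compose with, not just replace, the default behavior, and the subclass itself documents who changed what.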
MT
- Worked on virtual sites, only to learn that they're even more complicated than I thought. There are a lot of design points that vsites are going to stress; need to talk more with other devs.
- The Interchange API is at the point where I'm comfortable with a 0.1.0 release. Once we have a bit more input from Vanderbilt, I'm hoping to move this forward.
- We got a conda-forge team member to review the recipe, and it's available now, just from a special label until I make a more major release.
- Spent some time moving personal notes on science-y issues to issues on the SMIRNOFF spec. I'm not optimistic that they'll be reviewed in a timely way. There's also a backlog of spec changes from Peter Eastman.
DD
- Did individual outreach to the 2/10 pharma partners that haven't yet submitted benchmarking results. Have a pharma partner meeting coming up, so I want to present current aggregate results. What else would be of interest to know during that call?
- Worked with LD on additional analysis suggested by Xavier Lucas and Bill Swope (off-benchmark PR #86). We expect the results of these to be slightly different, so we'll have partners volunteer to use it and tell us which they like best.
- On QCArchive, modified a previous submission to fix a globbing pattern. This is running now.
- Submitted industry public set MM optimizations. Some issues with submissions due to size constraints; I'm going to do some tests locally to figure out why the scale is such a problem.
- Laying out a proposal for the next generation of QCA usage. We've been using the qca-dataset-submission repo, leveraging GitHub Actions, for a year now. I'd like to lay out plans for the next iteration based on how we want to be using it, and how to support STANDARDS v3. In particular, I'd like to reconsider how we distribute work, and how data relates to repos and Zenodo. There are also potential synergies between this and PLBenchmarks on F@H.
- Not much explicit progress on PLBenchmarks; still thinking about design.
TG
- Re: toolkit PRs, I was waiting for tests to pass on TagSortedDict; I've merged it now.
- Re: the other PR, I'm trying to make it line up with recharge. I'll know whether this is good soon.
- JW – Is this important for the next release?
- TG – If we merged this, then we'd have recharge using the private methods from the OFF toolkit.
- JW – I'm OK with this as long as the users of these private methods don't expect a lot of stability.
- MT – I'm OK with this; we "own" both sides of these private methods.
- JW – You'll need to go back to pip installing if we make further changes.
- MT – Switching between stable and pip installs while developing adjacent tooling really isn't that inconvenient.
I’ll be presenting results of fitting using alkanes (bonds and angles) on Wednesday. I found a possible bug with computevirtua
JW PB LW
JW
- Presented at MolSSI; not much feedback. Will be watching progress on MMIC and MMElemental, but can't spend significant effort.
- FE workshop.
- Connor project update: good progress on AM1 optimization analysis + constraining. Analyzed how much charges change with and without restraints. Some question as to the "gold standard". Feel free to join Connor update meetings on Wednesdays.
- Planning for followup workshops. JM – theme refresh.
- Preparing for next release: final word from TG on his two PRs? Also want to get charge rounding in.
- Will be releasing 1.3.1 and 2.0.0-rc.1 force fields ASAP.
Action items
Decisions