Carryover thoughts from Monday
JW – Bespokefit/forcebalance - Even if we roll out bespokefit 2 and switch to smee tomorrow, we’ll need to support bespokefit 1 (and therefore forcebalance too) for some amount of time. How do we want to plan+communicate this deprecation?
Notes
Two categories of thought: What do we want to do? What do we bring to the table?
Scientific questions and user needs
User needs is broad/several categories, including science team, but also force field users, …
Values = particular resources we have and a commitment to open values; Strengths = partnerships with academic groups and the ability to collaborate with industry
One of the fundamental ideas is knowing whether infra serves science team, users, or both?
One opinion is “yes, that’s what we strive to do”. But in practice, while it’s easy to prioritize between Lilly and outside users, it’s often hard to identify where infra is blocking.
There’s necessarily some tension between “user needs” in the sense of “downstream people in labs we have never heard of” and in the sense of “stuff the science team needs”
Our infra is mostly ahead of our science
Maybe we need to think about longer term infra and anticipate future needs to enable more science in the future balanced with the urgent needs to keep and maintain software
Scaling is important, we have to have things that scale and are not just a slog, or it will die.
What’s more important: industry funders or things that will get NIH grants?
We did not used to have divisions, but as we grew we divided into teams, and there is more and more crossover between the scientific and technical fixes. People make assumptions about which responsibilities go to which person.
Lilly recently made a technical fix, and the fix was small. NAGL is an example of the science team doing the infra team’s job.
We need to not let the division be a bottleneck to advancements. If someone who is not good at software comes up with a good idea, we would need to devote infra effort to it.
We don’t want the idea that the science team does not do conda releases to hold back advancements, as Lilly pushing the NAGL release was useful.
We have an unwritten priority of who we support
With the science team being smaller, we have way too much science, but Lilly is frequently the only one who can do it.
Largely agree with what’s being said. With NAGL I think it’s time to do the handoff. But there’s friction when there’s a handoff from science to infra, and it’d be good to smooth that out.
The trainees should be doing work that leads to publications rather than working on conda releases
With 5 people on staff we are limited in what teams we can make
I think NAGL is well designed, but the BeSMARTS handoff is an example of taking a look at the code to see if it’s good. We have limited experience in suggesting improvements to an existing code base, and we don’t have a real process in place to move code between academic groups and OpenFF. We want to rethink how these transitions look. Maybe put a whole block on the calendar to review changes.
Don’t we have this already?
Yes, we have a block to review BeSmarts Epic
BeSmarts Epic is about knowledge transfer
How did you decide that is not in scope?
That is my understanding
It could be
We want to use 100% of our time with JM to work on understanding the code
We could have a video or blog post on the basics of code writing, like minimizing dependencies and keeping documentation
That could be a good idea, but I think we need backing on the org level. I agree there should be something like that.
There is still scope to expand beyond the 5 person team. Take the charge model for example. We can work on that in 6 months, but I’m not sure how that maps to the OpenFF plan.
Coordination on the long term. Julianne learning FF fitting is an example of that
Re: Knowing whose job different things are - That’s what we’re here to discuss. Having everyone in the room is exactly the right venue for this discussion.
This is the beginning of the road-mapping session and we could dive into this later, but while our minds are open we should discuss the morning topics
Not sure how much we need to talk about our commitment to open source and open data, but I would love to hear people’s opinions on the larger vision of OpenFF
I don’t get anything from our industry partners. Them wanting DNA does not map to my prioritization, even though they are funding us
Part of why this project exists is that people want binding FE calculations, but you could say they don’t really want FFs or Interchange, and they are not going to ask for new functionality. They want to model a protein with PTMs easily, or combine FFs. MT is doing things important for our functionality, but no one has feedback until you offer new functionality.
Going to the ad boards is a no-brainer, but we should also make people feel like their needs are being listened to by asking them what they struggle with. Would it be useful to MT to hear that?
We often do not have enough team time to check in with industry, but we could use people like MT to do these check-ins and see how they are using the software and what issues they are struggling with.
I went on a mission to have more one-on-one meetings, but it takes a lot more time than the 1 hour meeting. It takes a lot of time to prepare and funnel emails which are frequently out of scope.
We now have people (James) to say no and say what is out of scope
It is also important for retaining current industry funding to have these contact points.
I imagine three buckets of users: ones who do not care about details and just want the FFs to be better; people who want a protein FF or DNA FF, which we are working on but don’t have a clear message about; and people who are trying to do things that we have long been able to do, but who don’t know about that functionality.
To connect this back to users: there are several tiers of users, and the level of care/resources we need to devote goes down the tiers. 1. Ourselves, our sponsors, and people who are very engaged and facilitating important science 2. Broader community and outside FF developers 3. Downstream users who don’t support us. Not solving a problem for people who don’t fund us is almost a funding opportunity.
Shortly I want us to move into a pure brainstorming phase, but do we have any other thoughts on what we are aiming for?
Is our goal narrowly serving binding FE calculations or is it supporting more general systems. We are able to build generic lipid systems which isn’t on the explicit road map. I think we should consider thinking more broadly. Are there other applications where there is money/resources for that OpenFF could get involved in?
My answer is “yes”; the consortium is fairly narrowly focused on small-molecule binding, with the exceptions of Genentech’s funding for Jen’s project and Moderna’s lipid FF work. Aside from that it’s all small-molecule stuff. Other than that, we’re interested in things we can facilitate without a ton of additional work (like Cole and Space group).
I just wrote a grant for ionic liquids, which are relevant for biology
DC is a success story where people are using the tools to do cool science without taking up a lot of our resources
Would be good to measure how many people are using our tools, will mostly help with attracting more federal funding.
I think we came up with a plan for telemetry which could work
We’ve had that in the backlog for a while now. (jokingly) Someone should really pick that up
We are talking with the eco-infra team to try to get tools to track our usage
Can I step back? (JE: yes) What are we trying to build? MS alluded to the tension between people who care about SM BFE and those in materials science. Maybe if we had more people like DC who pick up the tools and do broader science; maybe there are opportunities for wider science
There are some other resources (0.5 M) and I just sent you the draft OMSF job description
Our FFs moving forward: will there be more branches? Maybe JH makes the best lipid FF, but how will it work with small molecules, proteins, etc.?
Our core hypothesis is that these are additive, basically that we’ll never have a FF branch.
It felt like you were asking two questions: 1. How much is in scope? Materials and all those things? 2. How many spin-off FFs?
I was meaning to ask the first one
The consortium funding is narrowly focused. In scope is what the funders say is in scope. Two ways we can broaden is that the funding interests get more broad and the other is an additional sub-project spins up with their own funding. I do not see the Pharma scope broadening but maybe there are other sources.
Been thinking a little bit about how to get more of these styles of collaboration, like with the Riniker group. We’ve tried a bit to get them on board but it never really stuck. So I’m interested in hearing ideas about how to build deeper collaborations.
My instinct is you try to build a number of collabs but the ones that work will stick
Like DC’s situation, where you get the funding and that brings us together
I am looking at European funding sources so maybe we need a little European satellite group
My thinking is that we didn’t have to work hard to get DC in, we just opened the door and he came in.
We need the software to support the collaborations. The OpenMM developers turned out to be toxic, which dropped collabs to add additional functionality there.
Brings up a big question about values. I want to ask how we can more fully embody the idea of bringing in collaborations by providing tooling that more people can plug into. We need to make a collaborative and more positive and inviting working environment.
It would be good to have clear rules of engagement for working with others. Peter’s toxicity keeps the code base more narrow. There is a tension between having a core set of features and not acknowledging anything outside of that. MT is at the forefront with the SMIRNOFF plugins, where we are unable to verify the efficacy of these. We want to let anyone plug in what they want, but once there is a code base there is an issue of maintenance.
It’s possible we could support a zoo of downstream force fields, that users benefit from, but that we don’t take responsibility for.
I feel like we do take responsibility for smirnoff-plugins.
With “anything goes”, we just sent an example to a partner, which is a pretty serious thing that we therefore DO kinda take responsibility for. I am not a fan of using smirnoff-plugins as an example. In part this has worked great because DC has taken this and run with it, partially because they have not needed much direct help. JM has written the pdbloader which can load all these different types of molecules. Are we a materials group? At the consortium level no, but we could have a spin-off project, and those are a huge win.
I use the phrasing “for free” to highlight big wins we can obviously prioritize. But once things take like an FTE-week, then we have to rethink it a bit.
I can be turning back examples on that if we want to test that soon
My concept is we take about 30 minutes. This is not a discussion. This is a space to put out crazy ideas without much justification. The idea is: what can we do? It’s great to hear building ideas, but this is not the time to hear “but” or “no”
I think co-optimized water can really help a lot of things.
We have the technology to do this today
Move to OPC3 - need to figure out what needs to be done to make sure nothing breaks
Switch the toolkit from chem-informatics rich information to only having the information we need
Most of our FFs do not include stereochemistry, so removing it would save a lot of runtime
There are a gazillion polymer builders from people in ChemE. What if we re-invented the wheel or wrapped existing ones
MuPT is in the process of this
MuPT - NSF collab grant for 5 groups where some of the software development money hires an OMSF person to help support interface stuff
In an era of financial concerns from the federal side, we should have some method of expanding what we are doing, with a GUI or by collaborating directly with Pharma on drug discovery projects
We are in some degree with
Direct polarization in a FF
What do we need to do to get GROMACS to implement this?
Re: prioritization - Would be good to have a more structured way of asking which company people come from and a way to assign priority that way. Some publicly visible tagging system.
At the end of Evaluator calculations it would be good to get a summary of what was used (barostat, thermostat, etc.)
Better logging would alleviate the issue of Evaluator doing a lot of stuff with a “just trust me” mentality
Unified logging across more tools
Add protocols for osmotic coeffs, etc
Adding ion FFs to the database instead of using Amber
Consistent logging strategy across whole infrastructure.
Machine learning for force field development.
More clearly communicate goal of OpenFF in terms of developing a universal FF (like, from the perspective of an outside lab trying to understand where we’re going, ex info about biomolecule FF, in particular what we want it to look like). Currently a lot of this communication is between OpenFF and sponsors, but it’s hard to see from outside.
Commandeer scientist time to write more blog posts “It’s 2025 why do we still care about a unified FF”
Do smee / make it so lots of people can rapidly fit FFs and do FF experiments far easier than before
Zoo of force fields - We tend to get stuck when telling other folks how to fit a FF - what if it only took a day on one computer instead of longer on more computers? This would make things way more inviting for folks who want to try training their own FF.
Documentation is well written, but I think it would be a good idea to add example workflows outside of the standard use cases
What if we reduced maintenance costs to zero? What if we pinned versions and distributed from Docker?
Does this just mean packaging?
Most of our problems are upstreams changing. So we just pin to the current version of upstreams, forever.
Pip wheels might help with that
Could we make a class II FF? We have enough QC data. It costs nothing. It might require a NN framework.
You need more parameters but now we have the data to do it
Higher-order terms in the Taylor and Fourier series
Class 2 is a different functional form (since more angles/coordinates are involved)
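As a reference point for this idea, class II force fields (in the CFF93 sense) replace harmonic diagonal terms with quartic polynomials and add cross-coupling terms between internal coordinates. A sketch of the bond/angle part; exact terms and constant names vary between implementations:

```latex
% Diagonal terms become quartic rather than harmonic:
E_b = k_2 (b - b_0)^2 + k_3 (b - b_0)^3 + k_4 (b - b_0)^4 \\
E_\theta = h_2 (\theta - \theta_0)^2 + h_3 (\theta - \theta_0)^3 + h_4 (\theta - \theta_0)^4 \\
% Cross terms couple internal coordinates that share atoms:
E_{bb'} = k_{bb'} (b - b_0)(b' - b'_0), \qquad
E_{b\theta} = k_{b\theta} (b - b_0)(\theta - \theta_0)
```

This is why class II is a different functional form rather than just more parameters: the cross terms make the energy non-separable across bonds and angles.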
Revisit LJ mixing rules
Could have an issue with codes not implementing other mixing rules
We bundle the r^4 and r^6 terms in the LJ potential, but if we incorporated the r^4 term explicitly then we would get rid of some of the issues requiring different atom types based on connectivity
I don't think a new water-ion model will work without this
This should improve generalizability
We need a dataset which would allow us to get the differences between different FFs
We need the scientific evidence and then the infra will lag behind
I think I only caught part of the conversation about the 1/r^4 term, but I know there is a modified version of OPLS that has this term: https://pubs.acs.org/doi/10.1021/acs.jctc.0c00847?ref=pdf (this is not the FEP+ OPLS)
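For reference, the 12-6-4 LJ-type idea discussed here adds an explicit charge-induced-dipole term to the usual 12-6 form. A sketch using generic C_n coefficient notation (not any specific implementation’s parameterization):

```latex
U_{ij}(r) = \frac{C_{12}}{r^{12}} - \frac{C_{6}}{r^{6}} - \frac{C_{4}}{r^{4}}
```

The r^{-4} term captures ion-induced-dipole interactions that the standard 12-6 form has to absorb into C_6, which is part of why plain LJ struggles for water-ion models.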
When you download NAGL it takes a long time, but if we push NAGL to a cloud source it would remove this burden
Write our own solvated topology builder so we don’t depend on Packmol
Track every custom torsion built with BespokeFit and append them to a long-running OFFXML
Iterate on FF fits much faster.
Automatically type parameters with BeSMARTS
Link Smee with BeSMARTS
Do espaloma typing
Maybe we should do Espaloma typing
If we can fit a FF in a day
You can do that with Espaloma
Write minimal re-implementations of SMARTS-matching algorithms to not depend on RDKit/OEChem
Add example using Sage to score different ligand poses generated by OpenFold and some docking tool(s)
Give clear instructions for how to cite the force field. This will make it much easier to measure impact.
There are tools which will do this automatically for the software
It’s unclear where in the software stack this should go
Trying functional forms with coupling terms / class 2 FFs. And adding anharmonic terms.
Getting a better benchmarking set that uses all parameters from our FF
Doing a torsion benchmark set in conjunction with industry, similar to our optimized conformer set
Re: strike team issues - A lot of times, even after manually working on/looking at torsions, we still have issues with sulfonamides and other groups. This will persist with NAGL since it’s fit to AM1-BCC.
Re-fit BCCs instead of using AM1-BCC off the shelf
Cache all AM1-BCC calculations and ship as bundled database
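A minimal sketch of what such a cache could look like, assuming charges are keyed by canonical SMILES and stored in SQLite (which could be shipped pre-populated as a bundled database). `compute` here stands in for a real charge backend (e.g. an AM1-BCC call); everything in this sketch is hypothetical, not existing OpenFF API:

```python
import json
import sqlite3

class ChargeCache:
    """Hypothetical persistent cache of partial charges keyed by SMILES."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS charges (smiles TEXT PRIMARY KEY, q TEXT)"
        )

    def get_or_compute(self, smiles, compute):
        row = self.db.execute(
            "SELECT q FROM charges WHERE smiles = ?", (smiles,)
        ).fetchone()
        if row is not None:
            return json.loads(row[0])  # cache hit: no expensive QM call
        charges = compute(smiles)      # cache miss: run the charge model once
        self.db.execute(
            "INSERT INTO charges VALUES (?, ?)", (smiles, json.dumps(charges))
        )
        self.db.commit()
        return charges
```

One design wrinkle: AM1-BCC charges are conformer-dependent, so a real key would need to encode the charge method and conformer-selection scheme (e.g. ELF10), not just the SMILES.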
It’s hard to develop a FF with OpenFF - the Toolkit is fairly rigid and it’s hard to do work unless things are laid out just the way it expects. And Evaluator is fairly black-boxy. As someone who is interested in making FFs and not just applying them, I had to fork it and make changes. Like with the alternative functional form proposed earlier, the initial feeling is “oh, this isn’t easy and isn’t supported” rather than “it’s straightforward to implement this yourself and test this”
I don’t like that the default charge model will become NAGL. Generally, there could be better options for disagreeing with the “default” direction that things are going.
If smee is so much faster, can we provide ensembles of different xml files to be used for … could we improve free energy estimates?
Does the training set cover all of the valence parameters? I think we had this issue when fitting DEXP to the Sage data.
Drop idivfs
Automatic iteration process of run ~nightly benchmarks of previous iteration, identifying worst molecules/chemistries, generate new QC data, refit, repeat
Build library of monomers with pre-assigned charges
Try new charge models and functional forms
Use NAGL and QM datasets to build a traffic-light system for “is this molecule in a strained/happy conformation?” Would be useful for evaluating protein-ligand binding poses.
Make a concerted effort to drum up materials science funding and putting together a project to build tools for this.
There’s some industry opportunities here, but less than pharma/biomedical applications. Maybe formulations work.
Might be worth looking at major charitable foundations
Evaluating our Ideas
| Idea | Discussion | Effort | Value | Risk | Dependencies | Resources | Timescale |
|---|---|---|---|---|---|---|---|
| Add example workflows for non-standard uses | | - | - | - | | | |
| Cover proteins, PTMs/NCAAs | Ongoing | High | High | Medium | | | |
| Expansion of virtual sites and neural network charges | Ongoing | High | High | High | | | |
| Expand quality to nucleic acids, lipids | Ongoing | High | High | High | | | |
| Co-optimized water | | Medium | High | Medium | | | |
| Make a GUI/system builder | | High | High | High | | | |
| Get involved directly with drug discovery | | High | High | Low if we watch for opportunities, higher if we push it | | | |
| Better logging in Evaluator | | Medium | High | Low | | | |
| Adding protocols for osmotic coefficients | | Medium | High | Low | | | |
| Adding protocol for radial distribution functions | | Medium | High | Low | | | |
| Tune ion parameter FFs | | High | High | Low | Adding protocols for osmotic coefficients; adding protocol for radial distribution functions | | |
| ML potentials for FF development | | High | High | High | | | |
| Clearer communication about strategic goals | | Low | High | Medium | | | |
| Write our own solvated topology builder | | High | High | Medium | | | |
| Automatically typed parameters with BeSMARTS | | High | High | High | | | |
| Espaloma typing | | High | High | Medium | | | |
| Get a better benchmarking set which uses all parameters | | Medium | High | Low | | | |
| Provide minimal estimate of uncertainty of a parameterized molecule | | High | High | | | | |
| Automatic iteration process for generating new QC data and fitting a FF | | High | High | High | | | |
| Docked conformation strain estimator using NAGL | DC says he can take on most of the effort | High | High | High | | | |
| Direct polarization supported in infrastructure | | Medium | High | Low | | | |
| Switch to using smee instead of ForceBalance | | High | High+ | Medium | | | |
| Simplify representations in toolkit | | Medium | Low | Low to Medium | | | |
| Publicly visible way of asking for support and getting tickets | | Low | Low | Low | | | |
| Making a polymer builder | | Medium | Medium | Low | | | |
| Unified logging across more tools | | Medium | Medium | Medium | | | |
| Class 2 FF | | High | Medium | High | | | |
| Revisit LJ mixing rules (Walden-Hagler alternative) | | Medium | Medium | High | | | |
| Distribute NAGL differently | | Low | Medium | Medium | | | |
| Track every custom torsion in BespokeFit and append to a long-running OFFXML | | Medium | Medium | High | | | |
| Minimal reimplementation of SMARTS matching algorithms | | High | Medium | Low | | | |
| Add an example using Sage to score ligand poses | | Low | Medium | Low | | | |
| Give clear instructions to cite the FF | | Low | Medium | Low | | | |
| Drop idivfs | | Medium | Medium | Low | | | |
| Write more blog posts | | | | | | | |
| Make it possible to fit a FF in a day or less | | | | | | | |
| Jenn's LJ mixing adding r^4 term | High | | | | | | |
| Fit FF in a day | Done (if we switch to smee) | | | | | | |
| Doing a benchmark set | High | | | | | | |
| Refit BCCs instead of using AM1-BCC | | | | | | | |
| Try new charge models with new functional forms | Already in progress | | | | | | |
| Make a concerted effort to drum up materials science funding by putting together a proposal | Out of scope for the consortium; PIs should do it. When we find people who have used OpenFF tools, put them in contact with project leadership. | | | | | | |
Afternoon notes
Planning Our Future
Prioritize (what should we do)
Sequence (in what order)
Assign (who should do it)
Now we need to think critically and strategically about which of these items we should do.
What other things do they depend on?
Who is going to do them?
How much effort is required?
Green highlighted rows are value > effort
Red highlighted rows are value < effort
This is a really useful thought experiment that is exposing cool ideas, but also miscommunications about how different people see the same thing. We might be moving too quickly toward actioning the items we discussed this morning, while more near-term high-value tractable tasks (e.g. “incrementally make torsions better”) aren’t in here. Should we add these?
The output of this table is not going to be programmatically turned into a roadmap, so we don’t need to spend a lot of time ranking effort vs value for these low-hanging fruit.
Ongoing work has already been added to this table, but common things (e.g. “release two flagship force fields per year”) haven’t made it onto the list.
We will come back later and create a clearer-eyed estimate about how much of our time should be going toward these projects. We won’t try to balance effort on them in this session
Goal for today is to leave with a shared concept of what we want to achieve and who is prioritizing which parts of that. For example, knowing direct polarization is important to some people, who it’s important to, and what support is needed.
Should we pause to discuss process for how research projects turn into core infrastructure?
Jeff: We’re going to continue to have confusion/uncertainty over how projects progress. Might be good to spend some time this afternoon on this. This is a good time to revisit this and define more of a process---for example, three phases of experimental projects that progress from research toward integration into toolkit. This exercise might make it easier for folks to better understand the process and effort required.
James: Goal is to identify ways we can rethink work practices, collaboration structures, and workflows to increase our capacity. Staff is small and limited, and scientists all have big ideas of what we’re trying to accomplish. We can probably allocate staff time more efficiently to support these things to get a multiplier effect. If staff tries to take on a huge thing (e.g. do direct polarization), it’s not going to be highly effective; but if we can restructure staff effort to make better use of the resources we have, it could potentially deliver a lot more.
Mike G: Two example science projects with different fates: NAGL is getting integrated, but Bayesian sampling didn’t make it.
John: NAGL actually came from Bayesian sampling of SMIRNOFF types!
DLM: We’ve done some amount of attempting to define the process from research → integration before, but it’s difficult to define this process universally.
Is there an idea where all the ongoing research projects get integrated?
DM – No, only the ones that look promising.
One of the ways we don’t make stuff scalable is by bringing everything in. Dropping dependencies is a better strategy.
If we did WBO again, would have spent more time derisking science before integrating infrastructure.
As we invest effort into an idea and it continues not working, estimate of risk tends to go up.
As a project becomes worth looking at for integration, we define metrics it should hit to be integrated
Project milestones are great, but Jeff also needs expectation of how much time he should allocate from infrastructure team