...
Red-highlighted rows are those where value < effort
This is a really useful thought experiment that is exposing cool ideas, but also miscommunications about how different people see the same thing. We might be moving too quickly toward actioning the items we discussed this morning, while more near-term, high-value, tractable tasks (e.g. “incrementally make torsions better”) aren’t in here. Should we add these?
The output of this table is not going to be programmatically turned into a roadmap, so we don’t need to spend a lot of time ranking effort vs value for these low-hanging fruit.
Ongoing work has already been added to this table, but common things (e.g. “release two flagship force fields per year”) haven’t made it onto the list.
We will come back later and create a clearer-eyed estimate of how much of our time should go toward these projects. We won’t try to balance effort on them in this session.
Goal for today is to leave with a shared concept of what we want to achieve and who is prioritizing which parts of that. For example, knowing direct polarization is important to some people, who it’s important to, and what support is needed.
Should we pause to discuss process for how research projects turn into core infrastructure?
Jeff: We’re going to continue to have confusion/uncertainty over how projects progress. Might be good to spend some time this afternoon on this. This is a good time to revisit this and define more of a process: for example, three phases of experimental projects that progress from research toward integration into the toolkit. This exercise might make it easier for folks to understand the process and effort required.
James: Goal is to identify ways we can rethink work practices, collaboration structures, and workflows to increase our capacity. Staff is small and limited, and scientists all have big ideas of what we’re trying to accomplish. We can probably allocate staff time more efficiently to support these things to get a multiplier effect. If staff tries to take on a huge thing (e.g. do direct polarization), it’s not going to be highly effective; but if we can restructure staff effort to make better use of the resources we have, it could potentially deliver a lot more.
Mike G: Two example science projects with different fates: NAGL is getting integrated, but Bayesian sampling didn’t make it.
John: NAGL actually came from Bayesian sampling of SMIRNOFF types!
DLM: We’ve attempted to define the process from research → integration before, but it’s difficult to define this process universally.
Is the idea that all ongoing research projects get integrated?
DM: No, only the ones that look promising.
One way we make things un-scalable is by bringing everything in; dropping dependencies is a better strategy.
If we did WBO again, we would have spent more time derisking the science before integrating it into infrastructure.
As we invest effort into an idea and it continues not working, the estimate of its risk tends to go up.
As a project becomes worth considering for integration, we define the metrics it should meet to be integrated.
Project milestones are great, but Jeff also needs an expectation of how much time he should allocate from the infrastructure team.