SB – My thinking is that there’s no major blocker to including them in Rosemary, IF they are ready. This depends both on the results of fitting and on our ability to export them to different formats.
CBy – Do we have to be “held hostage” by the lowest common denominator in FF engines?
DM – My concern is partly what Chris said, and also downstream workflows. When you shift between systems with vsites and systems without vsites, you can cause a lot of damage. Like, just switching from TIP3P to TIP4P is quite a bit of work. So even if the engine-compatibility issue goes away, we still need to consider not breaking existing workflows. So my proposal would be that we consider making two variants - “Rosemary with vsites” and “Rosemary without vsites”.
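(Context for DM’s TIP3P → TIP4P example: TIP4P adds a massless “M” charge site that is reconstructed from the real atoms at every step rather than integrated, which is part of why the switch disrupts workflows. A minimal geometric sketch in plain Python, assuming the original TIP4P offset of 0.15 Å from the oxygen along the H–O–H bisector; engine-specific bookkeeping such as exclusions and charge reassignment is omitted.)

```python
import math

def tip4p_m_site(o, h1, h2, d_om=0.15):
    """Place TIP4P's massless M site d_om angstroms from the oxygen,
    along the bisector of the H-O-H angle. Coordinates are 3-tuples."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)
    def norm(a):
        n = math.sqrt(sum(x * x for x in a))
        return scale(a, 1.0 / n)
    # Unit vectors from O toward each hydrogen; their sum points along the bisector.
    bisector = norm(add(norm(sub(h1, o)), norm(sub(h2, o))))
    return add(o, scale(bisector, d_om))

# Illustrative symmetric water geometry: O at origin, hydrogens mirrored in x.
o = (0.0, 0.0, 0.0)
h1 = (0.7569, 0.5858, 0.0)
h2 = (-0.7569, 0.5858, 0.0)
m = tip4p_m_site(o, h1, h2)  # lies on the +y bisector, 0.15 A from O
```

Because the M site carries charge but no mass, its position must be recomputed after every integration step from the parent atoms; it is exactly this per-engine bookkeeping that makes exporting vsite force fields across simulation engines (and swapping water models mid-workflow) nontrivial.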
CBy – I think that’s a great idea. Would it be double the work to make two variants?
SB – I think it’s just a matter of compute, not human time. The real compute question is “how expensive are the biopolymer benchmarks?” If we only have enough compute power to do it once then we can’t make two FFs. I would stipulate that the period of time where we maintain two FF variants should be specified, and it should be short.
MS – Maintaining two force fields would take about 130% of the human time of making and maintaining one.
DM – We’d probably need a period of overlap in order to do side-by-side testing and prove that the vsites are worth it.
CBy – OPLS3 showed that vsites are worth it, especially e.g. in binding free energy calcs with pyridine nitrogens, and other places with lone-pair anisotropy. So is it really necessary to PROVE that we need vsites?
KM – We would use a FF with vsites, even if it requires more human time from us.
CS – What’s the alternative? To not do vsites at all? What would alternatively be done with the time?
DM – It’s true that we could go to vsites more directly without a lot of testing.
MS – The bottleneck would be MThompson’s time in getting exporters set up.
SB – The work that goes toward vsites would overlap substantially with going toward things like graph-based charge models. So we’re already putting in a similar amount of effort whether we do vsites or not.
MS – One question is whether we prove-then-implement, or implement-then-prove?
MM – Sage is already fine. If people’s workflows can’t handle vsites then they still get to work with Sage.
CS – I’m also excited to get bespoke fitting to work.
MS – Does anyone on this call use CHARMM extensively? That’s one of the harder exporters to build.
FP – YOLO on virtual sites, but you’ll need the data regardless, whether before or after.
DC – I’ll take the chance to plug our preprint, where we also find that vsites give big accuracy gains (in a different force field model): Exploration and Validation of Force Field Design Protocols through QM-to-MM Mapping.
JW – The opportunity cost of pursuing vsites now would be delaying polarizability support and refinement of small-molecule benchmarking tools.
GT – Early on, when we were testing pmx with the folks in Göttingen, vsites were clearly superior. I would advocate doing vsites before polarizability.
MS – I’d think at this point we could assemble a document showing which vsites we propose adding, to start gathering feedback.
DN – I’m somewhat concerned about how this affects the existing backlog, but I don’t fully understand the effects right now.