Small ring parameter presentation (slides: ff_fitting_meeting_20240116.pptx)
Slide 2
BS: Are you changing the equilibrium angle? The SMIRKS?
BS: What about pucker in 4- and 5-membered rings? Do you know how this new parameterisation affects the barrier height of the pucker?
CBy: Are we comparing bonds and angles between QM and MM in general?
AMI: Not in general, but LW and I have been talking about working that kind of thing into our benchmarking suite. Until it gets into the benchmarking suite, my current dashboard is fairly straightforward to use. In the longer run this could go into the general benchmarks.
CBy: The equilibrium-angle plots you showed shine a light on a problem which you could fix nicely. I wonder if we could raise red flags when there are deviations between MM and QM geometries; that is, a good use could be identifying pathologies where there are substantial differences.
AMI: Right, that's an active area we want to pursue. Hopefully we can get it implemented soon.
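The red-flag idea discussed above could be prototyped in a few lines. A minimal sketch (pure Python; function and argument names are hypothetical, not from any OpenFF tool; coordinates assumed to be in the same units for both geometries):

```python
import math

def bond_length(coords, i, j):
    """Euclidean distance between atoms i and j."""
    return math.dist(coords[i], coords[j])

def flag_geometry_deviations(qm_coords, mm_coords, bonds, tol=0.05):
    """Flag bonds whose MM length deviates from the QM length by more than tol.

    bonds: list of (i, j) atom-index pairs.
    Returns a list of ((i, j), qm_length, mm_length) for flagged bonds.
    """
    flagged = []
    for i, j in bonds:
        qm = bond_length(qm_coords, i, j)
        mm = bond_length(mm_coords, i, j)
        if abs(mm - qm) > tol:
            flagged.append(((i, j), qm, mm))
    return flagged
```

The same per-internal-coordinate comparison would extend naturally to angles; the point is that a per-parameter breakdown, rather than a single whole-molecule RMSD, is what surfaces pathologies like the one Lexie found.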
LW: In some ways the RMSD metric and internal RMSD look at bonds and angles, but since we don't break them down per-parameter they wouldn't find what Lexie found here. That could be broken down further.
TG: Did you just fit the new parameters, or do a full re-fit?
AMI: Re-fitting everything.
TG: Curious to see how much everything else shifted?
AMI: Took a quick look; it's hard to look at torsions meaningfully because of the scale of the force constants. From a quick skim, the parameters I changed, changed the most.
TG: You could look at the mval change, as those are normalised by the priors.
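TG's suggestion of comparing shifts on the prior-normalised scale amounts to something like this sketch (assuming one scalar value and one prior width per parameter; the function name and the idea that mvals are simply shift-over-prior are my reading of the comment, not a quote of the ForceBalance internals):

```python
def prior_normalized_shifts(old, new, priors):
    """Shift of each parameter between two fits, divided by its prior width,
    so shifts in different units (e.g. angles vs. force constants) are comparable."""
    return {name: (new[name] - old[name]) / priors[name] for name in old}
```

On this scale a 2-degree angle shift under a 20-degree prior and a 1-unit force-constant shift under a 100-unit prior can be compared directly, which is what makes the "did anything else move?" question answerable across parameter types.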
JW: Just to check, the ddE etc. metrics you showed earlier were for the entire test set?
JW: Going back to the ddE distributions – why do you think molecules without new small-ring parameters get better?
AMI: I think a lot of molecules in the dataset have a ring, which might affect the distribution shape.
JW: It could also be from liberating non-small-ring angles from the ring training data.
JW: The right side of the peak all looks fine – do we know why the left side of the peak improves with the new FFs but not the right?
CBy: Thoughts on order dependence when adding new parameters? I expected "FF 1 vs 2" to be a big improvement, but it's not. Seems like when adding new parameters we should aim for "the targeted chemistry, the whole targeted chemistry, and nothing but the targeted chemistry". It's hard to separate the overall benchmarking results into "improving things because the new parameter does better" vs. "the new parameter is bad, but it removed bad training data from the more common parameter". How can we defend against this?
AMI: I'm coming around to doing more data visualization, like BW's dashboard. But broadly I don't have ideas about how to defend against errors introduced by reordering.
CBy: Maybe, when you mean to include one particular chemistry and not another, you could try writing SMIRKS to force that (like a big string of non-capturing wildcard SMIRKS atoms/bonds that form a loop, to force matches only to endocyclic bonds).
CBy: I'm generally happy that you're trying to add new FF parameters and hope you keep it up. Would love to see more torsions getting added eventually.
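CBy's endocyclic-only idea might look something like the following untested sketch: tag the bond of interest, then close a loop of wildcard atoms back to the first tagged atom, so the tagged bond can only match inside a ring of that size (here, four-membered; each ring size would need its own pattern):

```
[*:1]1~[*:2]~[*]~[*]~1
```

If pinning the ring size is not needed, the SMARTS ring-bond primitive `@` (as in `[*:1]@[*:2]`) is a simpler way to restrict a match to endocyclic bonds, though it matches a ring bond of any size.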
TG: With the splits that you did, did you copy the parameters over?
TG: When I was doing these types of fits a while ago, the H thing came up, and a very common split was C-C-H, so potentially just a *-*-H would work.
AMI: Agree – I noticed H splitting in other parameters too.
TG: I can't recall if I noticed this in rings, but it's just a suggestion for general improvement.
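TG's generic *-*-H split, written as a SMIRKS angle pattern, would read something like the following (my transcription of the shorthand, untested): any two atoms with the third, H-bearing position pinned to hydrogen.

```
[*:1]~[*:2]~[#1:3]
```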
CBy: So at the end of the day, is FFv1 the best candidate here?