Participants

Goals

Discussion topics

Conformer minimization reproduction (presenter: LM)

  • Slides: structure_compare_off200.pptx (attached)

    • CB: what’s the criterion for a match?

      • LMI: 0.4 A between the MM and QM structures (see the structure-matching sketch at the end of these notes)

    • LPW: It should be possible to "deactivate" geomeTRIC convergence criteria when running it from the command line by passing in very large values for the criteria; for example, "--converge dmax 1 drms 1" sets the max-displacement and RMS-displacement criteria to 1, effectively satisfying them all the time so they are no longer limiting.

      • LPW: you have to put these arguments at the end

      • One more note: you can avoid jumping over minima by setting the maximum displacement for steps taken in the optimizer. Something like "--tmax 0.03 --usedmax yes" will ensure that no atom moves by more than 0.03 A in an optimization step (see the command-line sketch at the end of these notes).

    • BS: the goal is to get the minima on the MM surface close to those on the QM surface

      • CB: that’s how I begin to think about it, but at the end of the day we want all the minima and nothing but the minima. Does the PES qualitatively differ?

      • CB: looks like LM’s saying that your MM minima are more stable than the QM minima. They may not necessarily exist on the QM surface. I think we want to compare matching minima

      • LM: all of Bill’s minima seem to also be minima on the QM surface

      • CB: but your MM minima are also on the QM surface; it’s just that Bill’s optimizer is leaping into a different minimum. So the former is a huge relief to hear. The question is: do we want to investigate Bill’s additional minima?

      • BS: my analysis had two stages: (1) this diagnosis, and (2) matching up structural minima – then we can look at energy differences.

      • LM: yes, the next step of this project is to compare the different minima

  • LM: my understanding is that internal coordinates are more robust than Cartesian, so I’m not surprised geomeTRIC might be finding a lower (more global) minimum

    • CB: I would usually do my minimisations in a restrained way. Could a quick solution be doing two optimisations instead of one?

    • DLM: is this just treating the symptoms instead of the disease? Isn’t the problem that the energy surface may be too different?

  • LM: since we’re starting from a QM structure, minimising with MM gradients might bump it into a different minimum

  • LPW: I think it’s possible one of two things could be happening.

    • 1. geomeTRIC may be jumping over a barrier that OpenMM is not, as LMI has summarised so far

    • 2. The PES may become very flat. Even if geomeTRIC isn’t jumping over any barriers, its tighter convergence criteria mean it has the potential to move very far along the PES. geomeTRIC is based on the kind of optimizer used in DFT codes. The force often ends up far below the force criterion because the other criteria (e.g. energy) also have to be satisfied.

      • CB: in this scenario, at least the ddE wouldn’t be very different between the conformers

    • LPW: I’d propose doing an experiment where you turn off all the non-force criteria.

      • LM: I’ll have a look.

  • TG: my only comment: given the age of the industry benchmark and the changes in the toolkit, I would look at the charges

    • LMI: I used AmberTools AM1-BCC

    • CB: OpenFF has found that the AM1-BCC implementations have discrepancies

  • LMI: as we move forward with this kind of analysis, are we happy to keep using OpenMM’s optimiser, vs geomeTRIC’s? I figure ELF10 charges are the best to use since that’s what Sage is trained on

    • BS: I think this is really important as that’s how we compute ddEs and it affects the ddE metric

  • CB: two situations… (~30 min in)

    • BS: if there were a minimum on the QM surface that was missing in MM, or if two QM minima converged to the same minimum in MM

    • CB: we don’t search QM minima exhaustively, so we shouldn’t chase after all minima. We should just try to look at the MM minima closest to those.

    • LPW: I agree the scenarios CB outlined are serious problems. There are a couple of things to consider in choosing the minimiser. I think the geomeTRIC optimiser is a bit more reproducible (e.g. you can choose the step size and convergence criteria, and do so in a way that avoids jumping over barriers). My sense is that the OpenMM minimiser was not originally designed with reproducibility in mind, although OpenMM recently added reporters to track minimisation progress (see the reporter sketch at the end of these notes). OpenMM is also much faster.

  • CB: does LM optimizer start out with steepest descent or BFGS? Is there a switching function where it starts out more steepest descentish and ends up more second derivativish?

    • LPW: yes

    • CB: can I adjust how quickly it moves from steepest descent?

    • LPW: you should be able to set this on command line, as above

    • LM: will give this a try

    • LPW (chat, addressing Chris): What you described in LM is exactly what's implemented in geomeTRIC. There is a "knob": the factor of the identity that is added to the Hessian before inverting it. A larger factor gives a more steepest-descent-ish step. The user sets the maximum step size, and the factor is determined automatically during the calculation (see the Hessian-knob sketch at the end of these notes).

  • LM – Another issue is “what will people be using with the FF?” Should we benchmark with geomeTRIC or with the OpenMM minimizer?

    • DLM: I think this is separate; a lot of what we’re doing with FFs is dynamics, which explores a lot of the PES. What we’re after here is just checking that the MM PES matches the QM PES as closely as possible, which means we want to be able to assess the same regions properly

  • BS: once we get structurally aligned conformers, we also need to compare ddEs.

    • LMI: my structures were more similar to the QM structures, so it was hard to get a one-to-one mapping between results since I had many more matches and more data. Some ddEs are better, some are worse; it’s not a straightforward change in the ddE metric. It may have improved a little (see the ddE sketch at the end of these notes).

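  • Sketch (structure matching): the 0.4 A criterion above, read as a best-fit RMSD between the MM and QM conformers. This is a generic Kabsch-alignment illustration, not the script actually used; identical atom ordering in the two structures is assumed.

    # Best-fit (Kabsch) RMSD between two conformers, in the same units as the input.
    # Assumes qm_xyz and mm_xyz are (N, 3) numpy arrays with matching atom order.
    import numpy as np

    def kabsch_rmsd(qm_xyz, mm_xyz):
        qm = qm_xyz - qm_xyz.mean(axis=0)
        mm = mm_xyz - mm_xyz.mean(axis=0)
        # Optimal rotation from the SVD of the covariance matrix.
        U, S, Vt = np.linalg.svd(mm.T @ qm)
        d = np.sign(np.linalg.det(U @ Vt))
        R = U @ np.diag([1.0, 1.0, d]) @ Vt
        diff = mm @ R - qm
        return np.sqrt((diff ** 2).sum() / len(qm))

    # matched = kabsch_rmsd(qm_xyz, mm_xyz) < 0.4
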
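  • Sketch (geomeTRIC command line): the flags below are the ones LPW quotes above; the input file name is a placeholder and engine selection (and its input files) is omitted, so treat this as a template rather than a tested command.

    # Shell out to the geomeTRIC CLI with the flags discussed above.
    import subprocess

    subprocess.run(
        [
            "geometric-optimize",
            "conformer_input.pdb",  # placeholder input structure
            # LPW: these arguments go at the end of the command.
            # Set the displacement criteria to very large values so they are
            # effectively always satisfied ("--converge dmax 1 drms 1"):
            "--converge", "dmax", "1", "drms", "1",
            # Cap the per-step displacement so no atom moves more than ~0.03 A,
            # reducing the chance of hopping into a neighbouring minimum:
            "--tmax", "0.03", "--usedmax", "yes",
        ],
        check=True,
    )
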
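  • Sketch (OpenMM minimisation reporter): a minimal way to track the OpenMM minimiser's progress, assuming OpenMM 8.1+ (where the minimiser accepts a MinimizationReporter) and a Simulation object ("simulation") built elsewhere; the reporter signature and the contents of args should be checked against the OpenMM docs.

    import numpy as np
    import openmm

    class ProgressReporter(openmm.MinimizationReporter):
        def report(self, iteration, x, grad, args):
            # x and grad are flat coordinate/gradient arrays; args holds
            # energies supplied by the minimiser.
            if iteration % 50 == 0:
                print(f"iter {iteration}: |grad| = {np.linalg.norm(np.asarray(grad)):.4f}", args)
            return False  # returning True would stop the minimisation early

    simulation.minimizeEnergy(reporter=ProgressReporter())
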
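  • Sketch (Hessian knob): a toy numpy illustration of the step LPW describes, not geomeTRIC's actual code. A larger factor lam pushes the step toward steepest descent; lam = 0 recovers a pure Newton step.

    # dx = -(H + lam*I)^-1 g, shown on a simple 2D quadratic surface.
    import numpy as np

    def regularized_newton_step(g, H, lam):
        return -np.linalg.solve(H + lam * np.eye(len(g)), g)

    H = np.array([[4.0, 0.0], [0.0, 0.5]])  # stiff and soft directions
    g = np.array([1.0, 1.0])

    print(regularized_newton_step(g, H, lam=0.0))    # Newton step: [-0.25, -2.0]
    print(regularized_newton_step(g, H, lam=100.0))  # nearly parallel to -g
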
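  • Sketch (ddE metric): one common convention for the ddE values discussed above, namely the MM relative conformer energy minus the QM relative conformer energy; whether this matches the exact definition used in the benchmark should be checked.

    import numpy as np

    def dde(mm_energies, qm_energies, ref_index=0):
        """Per-conformer ddE: (E_MM - E_MM[ref]) - (E_QM - E_QM[ref])."""
        mm = np.asarray(mm_energies, dtype=float)
        qm = np.asarray(qm_energies, dtype=float)
        return (mm - mm[ref_index]) - (qm - qm[ref_index])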

Polarisability (presenters: TG/WW)

  • Slides will be linked here

  • TG: I built and clustered a SMARTS tree for WW’s polarisabilities

  • WW (Slide 1) – Just want to clarify that the “one param per element” approach worked well on a small dataset with 4 elements, but not so much on a larger one with 9 elements. Sage types worked better on the larger set.

    • PB – These are polarizabilities based on LJ SMIRKS?

    • TG – Yes

    • CB – So, performance on “sage” is on 525 mols. What was it trained on?

    • WW – Test set is subset of training set.

    • CB – You’d think, if you trained elements…

  • TG: any suggestions to limit negative polarisabilities?

    • BS: the underlying charge model may be overpolarised, so the negative polarisabilities may be trying to compensate for that

    • WW – In this special fitting method, there is no underlying charge model; we just impose an electric field on the molecules and fit to the result. So we assign base charges based on RESP and then assign polarizabilities based on how the e- density looks after applying the external electric field (see the toy fitting sketch at the end of these notes)

    • CB – we first subtract the ESP/EF applied …

    • DLM – is this similar to the problems faced by RESP? If you don’t restrain charge on buried atoms, they can end up weird

    • CB – In the first case (bromothiophene), one was +44 and the other was -44? That’s exactly the kind of symptom I was seeing back in the day, where the fitting matrix was ill-conditioned. That could be rectified by some kind of ridge regression or hyperbolic restraints, and you can identify these cases with singular value decomposition (see the toy fitting sketch at the end of these notes). In the second case, with the nitrogens, it’s not clear to me that it’s a matter of an ill-conditioned matrix. So that’s harder.

    • BS – Re: methodology - You take a gas phase mol, calculate e- density, then apply E field, then calculate e- density again and fit to what changes. Is it possible that the E field affected the hybridization state/resonance? Polarization couldn’t capture the formation of a zwitterion for example.

    • WW: this is what the model is capturing

    • DLM: I think what BS is saying is that polarisability may not capture cases where, e.g., charge has moved from one end of a mol to the other and changed the whole bonding pattern

    • CB – I’d argue that the phenomenon you’re talking about IS polarization - movement of density around a wavefunction in response to an E field. But if the consequences of that are dramatic then we can’t expect a simple model to predict it, so a point-centered dipole may not be sufficient to reflect this effect. That would require something like the Gasteiger-Marsili method… the MM world COULD allow electrons to move around if adopting the G-M or (QEq?) method

    • DLM: maybe a concrete thing to do here is to see if there’s a big change in resonance or bond pattern when the e-field is applied… Could use something like the connectivity rearrangement checks / bond order calcs to view the change.

    • CB: I would suggest looking at population densities

    • BS + DM – Partial bond orders would work too.

  • JW: does the geometry change when you apply e-field?

    • WW: no

  • CB: given a model that doesn’t allow for mol-wide e- movement, can we allow it to perform numerically better by allowing nonphysical polarisability values?

    • BS: I think negative polarisabilities are a big problem for a number of reasons

  • DM (chat) – I think it’s almost a perfect parallel to the “torsion fits shouldn’t include breaking and forming bonds” issue — there, we have to throw out cases where bonds broke and formed when we drove the torsion.

    Here, there is no breaking and forming bonds. But there’s charge moving around, and when the charge moves “too far” (in a way we don’t expect/want our model to be able to capture) then we have to filter out those cases before fitting.
    We can only fit the model to cases it is supposed to be able to cover.

  • LPW – Yeah, one thing I’ve learned is that FB (ForceBalance) fitting often results in unphysical params. So it can be necessary to have regularization (like the hyperbolic restraints in RESP), or to have the physical parameter be a function of the fitting parameter to ensure it stays positive (see the reparameterisation sketch below).
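  • Sketch (toy fitting example: SVD and ridge): a synthetic linear least-squares example of the failure mode CB describes, where two nearly redundant parameters (e.g. a symmetric atom pair) blow up with opposite signs. The design matrix is made up; this is not WW's actual fitting code.

    import numpy as np

    rng = np.random.default_rng(0)

    true_alpha = np.array([1.2, 1.0, 0.8])           # three "polarisabilities"
    A = rng.normal(size=(50, 3))                     # synthetic design matrix
    A[:, 1] = A[:, 0] + 1e-6 * rng.normal(size=50)   # columns 0 and 1 nearly identical
    y = A @ true_alpha + 0.01 * rng.normal(size=50)  # noisy "response" data

    # A near-zero singular value flags a poorly determined parameter combination
    # (here alpha_0 - alpha_1): the kind of thing SVD can diagnose.
    print("singular values:", np.linalg.svd(A, compute_uv=False))

    # Ordinary least squares: the degenerate pair can come out huge with
    # opposite signs (the +44 / -44 symptom).
    print("lstsq:", np.linalg.lstsq(A, y, rcond=None)[0])

    # Ridge regression, solving (A^T A + lam*I) x = A^T y, damps the blow-up.
    lam = 1e-2
    print("ridge:", np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ y))
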
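  • Sketch (positivity via reparameterisation): one way to realise LPW's suggestion of making the physical parameter a function of the fitting parameter. The quadratic objective and target values are stand-ins for a real fit; the point is only the softplus reparameterisation.

    import numpy as np
    from scipy.optimize import minimize

    def softplus(theta):
        # Always positive; exp(theta) is another common choice.
        return np.log1p(np.exp(theta))

    # Made-up targets; the middle one would go negative in an unconstrained fit.
    target = np.array([1.5, -0.3, 0.7])

    def objective(theta):
        alpha = softplus(theta)  # the physical parameter stays > 0
        return np.sum((alpha - target) ** 2)

    result = minimize(objective, x0=np.zeros(3))
    print("fitted alpha:", softplus(result.x))  # the middle entry approaches 0 instead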


Action items

  •  

Decisions