...

While OpenFF has yet to move to a full neural network force field in the framework of Espaloma, it may be useful to use Espaloma as a reference, and we may be able to use Espaloma to determine areas where OpenFF parameters need improvement. For example, there may be cases where OpenFF uses one parameter to encode a particular chemistry that Espaloma splits into many different values. Here, Trevor Gokey's work on automated parameter generation could come in handy for partitioning the Espaloma data. If assigned parameter values are significantly different between Espaloma and OpenFF, that would also be worth exploring.

Experiments

Big Torsion Deviations

As a first attempt, I labeled a data set with both Espaloma and Sage 2.1.0 and compared the values they assigned. Two of the torsions, shown below, had deviations of more than 10 kcal/mol between the Sage force constant and the average Espaloma value.
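The comparison step can be sketched as follows. This is a minimal illustration, not the actual analysis script: the parameter IDs and force-constant values are made up, and in practice the Espaloma values would come from labeling each molecule in the data set.

```python
# Hypothetical sketch: given, for each Sage parameter, the single Sage force
# constant and the list of Espaloma-assigned values across the data set, flag
# parameters whose Sage value deviates from the Espaloma mean by more than
# 10 kcal/mol. All IDs and numbers here are illustrative.
from statistics import mean

# parameter ID -> (Sage k in kcal/mol, Espaloma-assigned k values)
assigned = {
    "t17": (1.2, [1.0, 1.5, 1.3]),
    "t48": (15.0, [2.1, 2.4, 1.9]),  # large deviation from the Espaloma mean
}

THRESHOLD = 10.0  # kcal/mol


def big_deviations(assigned, threshold=THRESHOLD):
    """Return parameter IDs where |Sage k - mean(Espaloma k)| > threshold."""
    return [
        pid
        for pid, (sage_k, esp_ks) in assigned.items()
        if abs(sage_k - mean(esp_ks)) > threshold
    ]


print(big_deviations(assigned))  # -> ['t48']
```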

...

For these torsions, I replaced the Sage value in the force field with the average value from Espaloma, and ran benchmarks on the OpenFF Industry Benchmark Season 1 v1.0 data set, yielding the plots below. I didn't expect to see much difference from such a small change of only two parameters, but it's encouraging that it didn't ruin anything, at least. The eps-tors-10 results might even be very slightly better, as desired.
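Since an OFFXML force field is plain XML, the replacement itself can be done with nothing more than the standard library. The sketch below is illustrative only: the SMIRKS pattern, parameter ID, and values are not the actual Sage 2.1.0 entries that were modified, and in practice the edit could equally be done through the OpenFF Toolkit's ForceField API.

```python
# Hedged sketch: overwrite a torsion force constant in an OFFXML snippet,
# substituting the (illustrative) Espaloma average for the Sage value.
import xml.etree.ElementTree as ET

# A minimal OFFXML-style fragment; the smirks, id, and k1 are made up.
offxml = """<SMIRNOFF version="0.3" aromaticity_model="OEAroModel_MDL">
  <ProperTorsions version="0.4" potential="k*(1+cos(periodicity*theta-phase))">
    <Proper smirks="[*:1]~[#6:2]~[#6:3]~[*:4]" id="t48"
            periodicity1="2" phase1="0.0 * degree"
            k1="15.0 * mole**-1 * kilocalorie" idivf1="1.0"/>
  </ProperTorsions>
</SMIRNOFF>"""

espaloma_mean = 2.13  # average Espaloma value for this torsion (illustrative)

root = ET.fromstring(offxml)
for torsion in root.iter("Proper"):
    if torsion.get("id") == "t48":
        torsion.set("k1", f"{espaloma_mean} * mole**-1 * kilocalorie")

patched = ET.tostring(root, encoding="unicode")
```

The patched string can then be written back out and loaded as a normal force field for the benchmark run.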

...

All Parameters

With these results in hand, I next repeated the process, this time replacing every Sage parameter with the corresponding average parameter from Espaloma. This probably isn't the best approach because many of the distributions look like the one shown below: there are multiple clusters of Espaloma-assigned values, and the Sage value is out in the middle. These may be good candidates for parameters that need to be split in Sage.
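One simple way to spot such split candidates automatically might look like the sketch below: split the Espaloma values at the largest gap into two clusters and flag the parameter when the Sage value falls between the two cluster means. This heuristic and its numbers are assumptions for illustration, not the method used to produce the plots above.

```python
# Illustrative heuristic for detecting bimodal Espaloma distributions with
# the Sage value sitting between the modes. All thresholds and values are
# assumptions, not part of the actual analysis.
from statistics import mean


def is_split_candidate(sage_k, espaloma_ks, min_gap=1.0):
    """True if espaloma_ks look bimodal and sage_k sits between the modes."""
    ks = sorted(espaloma_ks)
    # locate the largest gap between consecutive sorted values
    gap, i = max((b - a, i) for i, (a, b) in enumerate(zip(ks, ks[1:])))
    if gap < min_gap:
        return False  # distribution looks unimodal
    low, high = ks[: i + 1], ks[i + 1:]
    return mean(low) < sage_k < mean(high)


# two clusters of Espaloma values with the Sage value in the middle
print(is_split_candidate(5.0, [1.0, 1.2, 0.9, 9.8, 10.1, 10.3]))  # -> True
```

A real version would presumably use a proper clustering or mixture-model test, and could feed the flagged parameters into Trevor Gokey's automated parameter-generation machinery to propose the split.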

...

The all-parameters benchmark is still running.