...

While OpenFF has yet to move to a full neural-network force field along the lines of Espaloma, it may be useful to use Espaloma as a reference, and we may be able to use it to identify areas where OpenFF parameters need improvement. For example, there may be cases where OpenFF uses a single parameter to encode a particular chemistry that Espaloma splits into many different values. Here, Trevor Gokey's work on automated parameter generation could come in handy for partitioning the Espaloma data. If assigned parameter values differ significantly between Espaloma and OpenFF, that would also be worth exploring.
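As a rough sketch of the kind of analysis this enables, the snippet below groups Espaloma-assigned force constants by the Sage parameter that matched the same atoms and flags parameters whose Espaloma values either sit far from the Sage value on average or are spread widely. The espaloma_values.csv file, its column names, and the thresholds are placeholders for illustration, not part of the actual workflow.

```python
import pandas as pd

# Hypothetical input: one row per (molecule, atom tuple) with the Sage parameter
# that matched those atoms, the Sage force constant, and the Espaloma-predicted one.
df = pd.read_csv("espaloma_values.csv")  # columns: sage_id, sage_k, espaloma_k

summary = (
    df.groupby("sage_id")
    .agg(
        sage_k=("sage_k", "first"),       # single Sage value per parameter
        esp_mean=("espaloma_k", "mean"),  # average Espaloma value
        esp_std=("espaloma_k", "std"),    # spread of Espaloma values
        count=("espaloma_k", "size"),
    )
    .assign(deviation=lambda d: (d.esp_mean - d.sage_k).abs())
)

# Parameters where Espaloma either disagrees with Sage on average or assigns
# a wide range of values to chemistry Sage treats as a single parameter.
suspects = summary[(summary.deviation > 10.0) | (summary.esp_std > 5.0)]
print(suspects.sort_values("deviation", ascending=False))
```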


Data

The attached PDF contains histograms comparing all of the Espaloma-assigned values for each Sage parameter to the corresponding Sage force constant.

Attached file: main.pdf
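Each page of that PDF is essentially a one-dimensional histogram with the Sage value overlaid; a rough matplotlib sketch of such a plot, reusing the hypothetical table from above, could look like this:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Same hypothetical table as above: one row per Espaloma-assigned value,
# tagged with the Sage parameter that matched the same atoms.
df = pd.read_csv("espaloma_values.csv")  # columns: sage_id, sage_k, espaloma_k
t129 = df[df.sage_id == "t129"]

fig, ax = plt.subplots()
ax.hist(t129.espaloma_k, bins=50, label="Espaloma")
ax.axvline(t129.sage_k.iloc[0], color="red", label="Sage 2.1.0")
ax.set_xlabel("k (kcal/mol)")
ax.set_title("t129")
ax.legend()
fig.savefig("t129_hist.png")
```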

However, a much more useful way of looking at this data can be found in this repository. After installing the corresponding conda environment, running make will build the necessary data files, and then running either python board.py or python twod.py will let you view the data interactively in a browser. Below is an example of the interface taken from the README.

...

Espaloma Benchmark

The figure below shows a benchmark comparing Sage 2.1.0, a variant of Sage 2.1.0 with many of its force constants replaced by average Espaloma values, and Espaloma itself. Espaloma performs very well.

...

Experiments

Big Torsion Deviations

As a first attempt, I labeled a data set with both Espaloma and Sage 2.1.0 and compared the values they assigned. Two of the torsions, shown below, had deviations of more than 10 kcal/mol between the Sage force constant and the average Espaloma value. These correspond to torsion IDs t129 and t140, respectively.

...
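The labeling side of this comparison can be done with the OpenFF toolkit; below is a minimal sketch. The Espaloma half is stubbed out with a placeholder function, and the molecule and the exact form of the per-torsion comparison are only illustrative.

```python
from openff.toolkit import ForceField, Molecule
from openff.units import unit


def espaloma_torsion_k(molecule, atom_indices):
    """Stand-in for the Espaloma-assigned k1 (kcal/mol) for these atoms.

    The real value would come from applying the Espaloma model to the
    molecule; that machinery is omitted here.
    """
    return 0.0


sage = ForceField("openff-2.1.0.offxml")
mol = Molecule.from_smiles("c1ccccc1CCO")  # illustrative molecule

# label_molecules maps atom-index tuples to the SMIRNOFF parameter that matched them.
labels = sage.label_molecules(mol.to_topology())[0]

for atoms, parameter in labels["ProperTorsions"].items():
    sage_k = parameter.k[0].m_as(unit.kilocalorie / unit.mole)
    esp_k = espaloma_torsion_k(mol, atoms)
    if abs(sage_k - esp_k) > 10.0:  # the 10 kcal/mol cutoff from the text
        print(parameter.id, atoms, sage_k, esp_k)
```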

For these torsions, I replaced the Sage value in the force field with the average Espaloma value and ran benchmarks on the OpenFF Industry Benchmark Season 1 v1.0 data set, yielding the plots below. I didn't expect to see much difference from changing only two parameters, but it's encouraging that it didn't ruin anything, at least. The esp-tors-10 results might even be very slightly better, as desired.

...
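The force-field edit described above can be made directly through the OpenFF toolkit's parameter API; a minimal sketch follows. The numerical values are placeholders rather than the actual Espaloma averages.

```python
from openff.toolkit import ForceField
from openff.units import unit

ff = ForceField("openff-2.1.0.offxml")
torsions = ff.get_parameter_handler("ProperTorsions")

# Placeholder averaged Espaloma values for the two outlier torsions (kcal/mol);
# the real numbers come from the labeled data set.
espaloma_means = {"t129": 1.23, "t140": 4.56}

for param_id, mean_k in espaloma_means.items():
    parameter = torsions.get_parameter({"id": param_id})[0]
    parameter.k[0] = mean_k * unit.kilocalorie / unit.mole

ff.to_file("esp-tors-10.offxml")
```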

All Parameters

With these results in hand, I next repeated the process, this time replacing every Sage parameter with the corresponding average Espaloma parameter. As shown below, the results differ more noticeably from the esp-tors-10 results, as expected. And positively, esp-full appears to perform a bit better by all three metrics. This is without any re-fitting, so Espaloma's average parameters for our SMIRKS patterns perform slightly better than our re-fit Sage 2.1.0 values.

The results above were actually still relying on the code that identified the large deviations from the Sage parameters, so not all parameters were replaced. Additionally, that code only looked at force constants, and only the first force constant for torsions.

In contrast, the graph below shows the esp-full-full (really full) results, where every value has been replaced: all force constants as well as all equilibrium bond lengths and angles. These clearly look much worse for the Espaloma values within the SMIRNOFF format. This makes me somewhat suspicious of the torsions in particular, because at first glance those values look the most different from Sage, at least compared to the bonds and angles, which are more recognizable.
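For completeness, the fully replaced (esp-full-full) force field can be generated with a loop over the relevant parameter handlers, roughly as sketched below; the espaloma_means mapping and its example values are placeholders for the averaged Espaloma data, not real numbers.

```python
from openff.toolkit import ForceField
from openff.units import unit

# Placeholder averaged Espaloma values keyed by handler and parameter id;
# the real mapping would be built from the labeled data set.
espaloma_means = {
    "Bonds": {"b4": {"k": 500.0 * unit.kilocalorie / unit.mole / unit.angstrom**2,
                     "length": 1.52 * unit.angstrom}},
    "Angles": {"a2": {"k": 100.0 * unit.kilocalorie / unit.mole / unit.radian**2,
                      "angle": 115.0 * unit.degree}},
    "ProperTorsions": {"t129": {"k": [1.2 * unit.kilocalorie / unit.mole]}},
}

ff = ForceField("openff-2.1.0.offxml")

# Bonds and angles: overwrite both the force constants and the equilibrium values.
for handler_name, attrs in [("Bonds", ("k", "length")), ("Angles", ("k", "angle"))]:
    for parameter in ff.get_parameter_handler(handler_name).parameters:
        means = espaloma_means[handler_name].get(parameter.id, {})
        for attr in attrs:
            if attr in means:
                setattr(parameter, attr, means[attr])

# Torsions: replace every periodicity's force constant, not just k1.
for parameter in ff.get_parameter_handler("ProperTorsions").parameters:
    new_ks = espaloma_means["ProperTorsions"].get(parameter.id, {}).get("k", [])
    for i, k in enumerate(new_ks):
        parameter.k[i] = k

ff.to_file("esp-full-full.offxml")
```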