2021-04-30 QCA User Group meeting notes

Participants

  • Ben Pritchard

  • @Joshua Horton

  • @Pavan Behara

  • @Trevor Gokey

  • @Simon Boothroyd

Goals

  • Updates from MolSSI (public server, upcoming changes, releases)

  • User/submitter questions

  • Infrastructure topics

Discussion topics

Item / Presenter / Notes

Updates from MolSSI

Ben

  • BP: In terms of production, nothing new.

    • We do have upcoming changes on the next branch.

    • Aiming for a portal release soon.

  • In the plain Dataset, status() shows lots of NaNs for records with INCOMPLETE status.

  • One thing added that you might be interested in: if there’s an InternalServerError, it is now stored in a table in the DB; if you’re running your own server, this is queryable using FractalClient. On the public instance, this lets me find these errors without trawling the logs. (A query sketch follows at the end of these updates.)

  • Really been doing a ton of backend work, simplifying things, and hopefully making it faster.

    • The added clarity will make it easier to address performance bottlenecks and add new features.
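
A minimal sketch of what querying these stored errors might look like on a self-hosted instance. The query_error_log method name and its filter argument are assumptions based on the discussion (the feature lives on the unreleased next branch), so check the released API for the actual call:

    from qcfractal.interface import FractalClient

    # Connect to a self-hosted server (address and verify flag are illustrative)
    client = FractalClient("localhost:7777", verify=False)

    # Assumed method: pull entries from the server-side error table directly,
    # rather than trawling the logs; the filter keyword is also an assumption.
    for err in client.query_error_log(before="2021-04-30"):
        print(err.id, err.error_text[:80])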

Validation clarity

Simon

  • qca-dataset-submission#203

    • Line 75 of validation.py

    • @Joshua Horton will take this on; we probably want to move some of this validation logic into openff-qcsubmit so it’s more usable by users locally

    • SB: want the validation to include a Details drop-down indicating, for each red checkmark, which index the failure occurred on (see the sketch below)
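
A minimal sketch of the kind of per-index reporting wanted here, assuming a generic validator callable that raises on failure; the names are illustrative, not the actual qca-dataset-submission or openff-qcsubmit API:

    from typing import Callable, Dict, List

    def collect_failures(
        entries: Dict[str, object],           # dataset index -> entry to validate
        validator: Callable[[object], None],  # raises on a failing entry
    ) -> Dict[str, List[str]]:
        """Group failing dataset indices by validator error type."""
        failures: Dict[str, List[str]] = {}
        for index, entry in entries.items():
            try:
                validator(entry)
            except Exception as exc:
                failures.setdefault(type(exc).__name__, []).append(index)
        return failures

Each error type’s index list could then be rendered in its own Details drop-down in the CI report.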

  • JW – I’m unable to reproduce Thomas Fox’s issue from benchmarking. Ideas for paths forward?

    • SB – I have a molecule that seems to have the same issue.

How to add more compute specs to an existing dataset

Pavan / Trevor

  • PB – Can I run a set of hessians on an existing dataset, similar to adding a compute spec?

  • DD – Not really, since the starting molecules are not the same as in the initial dataset. I think we could implement this as a GitHub Action that launches hessians as the optimizations complete: each time the action runs, it would take the newly completed optimizations and start a new hessian for each (see the sketch after this item).

  • TG – This sounds great, though it conflicts a bit with the idea of versioning in the dataset standards

  • DD – You could say that “the hessian dataset is tied to its optimization”

  • JW – Could also have the action populate another dataset with -hessians appended.

  • TG + DD – That’s how we currently do it.

  • DD – Could do it either way, but I’m somewhat partial to extending the existing dataset

  • TG – I feel weird about messing with state after creation, but we already do that a lot, so maybe that’s not too bad

  • SB – From the researcher’s side, I think it’s OK to have these state changes. We are switching to a model of recording the state of the dataset by enumerating the record IDs that we pull down.
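
A minimal sketch of the iterative step such an action might run, using the qcportal API current at the time (0.15.x); the dataset name, spec name, and method/basis are placeholders:

    import qcportal as ptl

    client = ptl.FractalClient.from_file()  # credentials for the submitting account
    ds = client.get_collection("OptimizationDataset", "Example Dataset")  # placeholder
    ds.query("default")

    submitted = set()  # a real action would persist this between runs
    for entry in ds.data.records.values():
        opt = ds.get_record(entry.name, "default")
        if opt is None or opt.status != "COMPLETE" or entry.name in submitted:
            continue
        # driver="hessian" turns the optimized geometry into a Hessian single point
        client.add_compute(
            program="psi4",
            method="b3lyp-d3bj",
            basis="dzvp",
            driver="hessian",
            keywords=None,
            molecule=opt.get_final_molecule(),
        )
        submitted.add(entry.name)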

Hessian convergence

Horton

  • JH – We’ve previously run hessians to “gau” convergence, but I think we’ll want to run to “gau_tight”; we also want a finer integration grid. In my experience this is necessary to get good results. (A spec sketch with these settings follows at the end of this item.)

  • TG – That’s not a setting in the hessians, that’s a setting in the optimizations. But in the big picture, this will depend on whether our hessians contain imaginary frequencies. Imaginary frequencies shouldn’t exist if we’ve really converged.

  • DD – HJ is preparing an optimization dataset right now – could one of you take a look at whether she’s using these tighter convergence criteria once the PR goes up?

  • TG – Not sure that HJ’s dataset will require this.

  • DD – So, let’s not change HJ’s dataset for now, and investigate the outputs to see whether they’re visibly bad.

  • TG – JH, are the convergence criteria set in geomeTRIC?

    • JH+DD – Yes

    • TG – If we’re going to make the convergence criteria tighter, should we make the wavefunction convergence tighter as well?

    • JH – Definitely agree on a finer grid; that should help convergence.

  • JW – Submit one or two datasets (with original vs. tight criteria)?

    • DD – I’d think we should do two, that way we can study the result of the changes.

    • SB – HJ will submit more torsiondrives on the recently-submitted dataset for Sage; that should be very high priority. After that there will be an optimization/hessian set, and I’m not sure which molecules will be in that.

    • DD – When Hyesu has prepared her optimization dataset, we’ll do one with tight criteria, and the other with the standard. The standard one will be high priority initially. Once it’s done, the tight-criteria one will become high priority.
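
A minimal sketch of how the tighter criteria might be requested when adding a specification to an OptimizationDataset, again with the 0.15.x qcportal API; the spec name, method, and basis are placeholders, and the keyword spellings follow geomeTRIC (“convergence_set”) and Psi4 (grid point counts) conventions:

    import qcportal as ptl

    client = ptl.FractalClient.from_file()
    ds = client.get_collection("OptimizationDataset", "Example Dataset")  # placeholder

    # A finer DFT quadrature grid, stored as a reusable Psi4 keyword set
    kw = ptl.models.KeywordSet(values={"dft_spherical_points": 590,
                                       "dft_radial_points": 99})
    kw_id = client.add_keywords([kw])[0]

    ds.add_specification(
        name="tight",  # placeholder spec name
        optimization_spec={"program": "geometric",
                           # tighter than the default "GAU" set
                           "keywords": {"convergence_set": "GAU_TIGHT"}},
        qc_spec={"driver": "gradient",
                 "method": "b3lyp-d3bj",
                 "basis": "dzvp",
                 "program": "psi4",
                 "keywords": kw_id},
    )
    ds.save()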

Irregularities between optimizations generated from torsiondrives

@Simon Boothroyd

  • SB – QCFractal#540

  • SB – Not sure whether this is outdated – would like to ensure that this isn’t still happening

  • DD – The main issue here is “atomicity” in how the manager talks to the server and the server talks to the database. There are opportunities for race conditions in these two paths, so this is a situation where data that should have gone into the history gets dropped, causing a cascade of dropped info. This may go away with our current updates. It’s a hard issue, because it’s a random race condition, so it’ll be hard to reproduce and measure. But these things should cease to happen over time as BP’s work lands.

  • SB – This isn’t a blocker, but I’m worried about having a different number of entries and minimizations, and getting something like off-by-one indexing errors, or fitting to the wrong minimum. We don’t currently have an automated way to filter out these cases.

  • SB – For now we’re going to take the results at face value. I think a fix for this would be on the QCF side.

  • TG – In my experience, the thing that breaks is the table lookup: the optimizations worked, but when the torsiondrive tried to collect everything, the table indexing is off. So I do manual energy checking myself to detect this (see the sketch at the end of this item).

  • DD – If there is an incomplete history, is the final molecule correct?

    • TG – So, this is about finding the lowest-energy conformation at each grid point.

    • DD – Ah, I was thinking about it the wrong way.

  • SB – Do the final molecule and the final energy come from the same thing?

    • TG – I think the final molecule+energy is legit, but the torsiondrive grid point table doesn’t report the actual minimum.

    • SB – So, that’s reassuring. Even if the structure isn’t the true minimum, the returned geometry DOES correspond to the appropriate energy.

  • DD – Does this always show up as an error? Or is it ever silent?

    • TG – Both. In some cases it’s an IndexError in the lookup, but sometimes it is silent.

    • SB – Yeah, there are two failure modes – Some silent, some loud.

    • TG – The best way to debug it is to ask “who ran it?”. When Levi saw this, it was coming from a dirty branch of some QC* dependency (QCEngine?). This was a long time ago.

  • Loud case:

    • TG – The big thing is to take a torsiondrive and call get_minima_positions, and the “loud” ones will segfault.

  • Silent case:

    • TG – This appeared when Pavan ran a query that returned None – I’ll try to find this in Slack.

    • td.dict()['minimum_positions'] will be incorrect

    • DD – This looks like a race condition
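
A minimal sketch of the manual energy check TG describes, comparing each grid point’s reported final energy against the minimum over all optimizations in its history (0.15.x qcportal API; the record id is a placeholder):

    import qcportal as ptl

    client = ptl.FractalClient()  # public QCArchive instance
    td = client.query_procedures(id=["12345678"])[0]  # placeholder torsiondrive id

    reported = td.get_final_energies()  # grid point -> energy the table reports
    history = td.get_history()          # grid point -> all optimizations run there

    for grid_point, opts in history.items():
        true_min = min(opt.get_final_energy() for opt in opts)
        if reported[grid_point] > true_min + 1.0e-6:
            print(f"grid {grid_point}: table reports {reported[grid_point]}, "
                  f"but the true minimum is {true_min}")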

Action items

@Joshua Horton will address qca-dataset-submission#203, which may include moving some of the validation functionality further into openff-qcsubmit; primarily want reporting of errored indices for each validator error type
@Simon Boothroyd will share case that failed validation with @Jeffrey Wagner to see if this reproduces issue identified by Thomas Fox
@David Dotson will implement a mechanism for creating Hessian single-point calculations from an OptimizationDataset submission, executed via GitHub Actions and triggered via a submission label; must be an iterative approach that adds new calculations from the source OptimizationDataset as the optimizations complete
@David Dotson will ask for Ben’s attention on QCFractal#540

Decisions