
Participants

Goals

  • Updates from MolSSI

    • BP & DD production testing the new QCF rc.

  • New submissions

    • SB’s single points with wavefunctions: OpenFF ESP Fragment Conformers v1.0, 63K small molecules (fewer than ~15 heavy atoms).

  • User questions/issues

    • pubchem sets 2-6: resubmission/record modification to remove wavefunctions

  • Science support needs

  • Infrastructure needs/advances

Discussion topics


Update from MolSSI

  • BP – Tested new branch last week, went pretty well.

  • BP – Some trouble with pubchem as reported by DD. I contacted our hosting provider about more storage space but I haven’t received reports of downtime.

    • (DD – In submitting pubchem sets 2-5 last week, I forgot to pull changes from master, and ended up submitting them from my local box with wavefunctions still on)

    • DD – How is storage looking on the host? Retagged several pubchem sets.

    • BP – Not looking good. It went up another 200GB. At this pace we’d run out of space in a month.

    • DD – I’ve tagged them as openff-defunct, but it may take a while to get that tag change propagated to all ~400k entries, and ones that are already in progress won’t get updated.

    • BP – I can run a script on the server itself. Also, do you have any managers without any tags? Because those could be running the defunct jobs.

      • (General) – We don’t have any managers running with no tags.

    • PB – There was a set of 50k molecules submitted over the weekend with wavefunctions and a smaller basis set, though that’s basically already done.

    • BP – In a pinch, I could move some datasets to tables on separate storage devices, which could give us access to more space. But that would be a bad long-term solution.

    • DD – Would it be possible to delete the results of some of the previous jobs?

      • BP – In an emergency we could, but that would be my last resort.

    • DD – I’ll send BP the defunct-retagging script for local execution on the server. I’ll also set the defunct sets to low priority. (A sketch of that kind of retag/reprioritize call follows this thread.)

    • DD – Trying to resubmit them without wavefunctions would hit problems with deduplication: since the wavefunction requests aren’t part of the submission spec, the new submissions would just trigger the wavefunction calcs again.

    • BP will look into how to attach more storage to QCA.

    • DD – So for now let’s put 2-5 on the back burner and let the compute chew through set 1 and other datasets until we get the space issues resolved.
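Below is a minimal, hypothetical sketch of the kind of retag/reprioritize call discussed in this thread. It assumes the FractalClient.modify_tasks interface of the QCFractal version in use; the record_ids list and the numeric priority value are placeholders, and the actual script DD sends BP may differ.

```python
# Hypothetical sketch of the retag/reprioritize step discussed above; assumes
# the FractalClient.modify_tasks interface of the deployed QCFractal version.
import qcfractal.interface as ptl

# Connect using the local server configuration (address/credentials).
client = ptl.FractalClient.from_file()

# Placeholder: result record IDs belonging to the defunct pubchem submissions;
# in practice these would be collected from the dataset entries.
record_ids = []

if record_ids:
    client.modify_tasks(
        "modify",                  # operation: modify existing tasks in place
        base_result=record_ids,    # tasks are addressed via their result records
        new_tag="openff-defunct",  # move them off the tag active managers pull from
        new_priority=0,            # assumption: 0 corresponds to LOW priority here
    )
```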

  • DD – PRP errors with pycpuinfo: it doesn’t give back the brand or brand_raw fields. There’s a PR on qcfractal to ignore that (PR #700). (A sketch of that kind of defensive lookup follows this thread.)

    • PB – More details on this?

    • DD – PRP runs everything through docker. When I looked into which nodes had the issue, there wasn’t a clear trend.

    • JW – Might conda be resolving the envs differently? I remember that there was a breaking change in the pycpuinfo package a while ago, so maybe they’re getting different versions?

      • DD – I don’t think it’s this - The docker containers all use an identical image.

    • BP – I’d check the conda channels that are being used; it looks like the channel priority was recently rearranged.
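For context on the pycpuinfo failure, here is an illustrative sketch (not the actual PR #700 change) of the kind of defensive lookup that tolerates the old brand key, the newer brand_raw key, and containerized nodes that report neither.

```python
# Illustrative sketch only: querying pycpuinfo without assuming which
# brand field (if any) the host reports.
import cpuinfo  # the pycpuinfo package imports as "cpuinfo"


def cpu_brand() -> str:
    info = cpuinfo.get_cpu_info()
    # py-cpuinfo 5.x renamed "brand" to "brand_raw"; some virtualized or
    # containerized hosts (as seen on PRP's Docker nodes) expose neither,
    # so fall back gracefully instead of raising.
    return info.get("brand_raw") or info.get("brand") or "unknown"


if __name__ == "__main__":
    print(cpu_brand())
```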

  • PB – So, we’ll put #225 (pubchem 2-5) on the back burner, and work on internal scientific sets in the meantime?

    • DD – Yes.

    • PB – I’ve put these into the “requires scientific review” column. Was that right?

    • DD – Yup!

New dataset submission

  • PB – I submitted SB’s set, but actually cancelled the submission action the first time, then reran it. Would this cause any issues? It seems OK; I started seeing completed jobs about 20 minutes after submission.

    • DD – Nope, it looks fine.

    • SB – Thanks for the quick action, and getting everything pushed through

  • SB – I’m interested in doing a similar set for dipeptides covering all permutations of amino acids. We’d do an HF/6-31G* preoptimization and then wavefunction calcs. Would that be of interest, CC?

    • CC – Sure, but our first charge model for rosemary is planned to be AM1-ELF1.

    • SB – My understanding of rosemary charges is that we’d do cap and charge as a last resort, library charges from AM1 or RESP, and graph net AM1 + BCCs as a first choice.

  • CC – Do we have infrastructure to do torsion REstraints instead of CONstraints?

    • BP – I’m not sure but I doubt it. The answer would be in geometric.

    • SB – Agree, REstraints in geomeTRIC aren’t implemented/possible. But if you contact Lee-Ping Wang, it may be simple for him to implement this.

    • CC – Could we do REstraints in optking in psi4?

    • SB – I looked through the docs, and it looks like that’s a CONstraint.

    • CC – So the best path would be talking to LPW about putting it into GeomeTRIC?

      • SB – Yes, probably wouldn’t be a ton of effort.
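For reference, a hedged sketch of how a torsion CONstraint (as opposed to a REstraint) can be expressed today through QCEngine’s geomeTRIC procedure. The hydrogen peroxide molecule is a stand-in, and the exact layout of the constraints keyword is an assumption worth checking against the geomeTRIC documentation; a harmonic REstraint has no analogous keyword, which is why the discussion points to LPW.

```python
# Hedged sketch: pinning a dihedral (a CONstraint, not a REstraint) in a
# geomeTRIC optimization driven through QCEngine. Molecule and keyword layout
# are illustrative assumptions, not a production submission spec.
import qcengine as qcng
from qcelemental.models import Molecule
from qcelemental.models.procedures import OptimizationInput, QCInputSpecification

# Minimal 4-atom molecule with a real dihedral (H-O-O-H).
mol = Molecule.from_data(
    """
    O  0.0000  0.7375 -0.0528
    O  0.0000 -0.7375 -0.0528
    H  0.8190  0.8170  0.4220
    H -0.8190 -0.8170  0.4220
    """
)

opt_input = OptimizationInput(
    initial_molecule=mol,
    input_specification=QCInputSpecification(
        driver="gradient",
        model={"method": "hf", "basis": "6-31g*"},  # HF/6-31G*, as in SB's proposal
    ),
    keywords={
        "program": "psi4",  # engine supplying gradients to geomeTRIC
        # Hold the H-O-O-H dihedral (0-based atom indices) at a fixed value in degrees.
        "constraints": {
            "set": [{"type": "dihedral", "indices": [2, 0, 1, 3], "value": 120.0}]
        },
    },
)

# Requires geomeTRIC and Psi4 to be installed and discoverable by QCEngine.
result = qcng.compute_procedure(opt_input, procedure="geometric")
print(result.success)
```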


Action items

  • David Dotson will send Ben a script for retagging and reprioritizing pubchem sets 2-5 to low priority, to mitigate the storage issue
  • David Dotson will work with Ben to finish QCFractal#700 (manager graceful shutdown), which is blocking higher Lilac use
  • David Dotson will finish QCEngine#339, which is blocking reliable PRP use

Decisions

