🗓 Date

👥 Participants

🗣 Discussion topics

General updates

Suggestion to cancel next week's meeting (MT offline)

Versioning

Is it a good idea to start making releases? (Just git tags for now; no conda packages.) Would start at 0.0.1 and probably increment the patch version every few PRs

Commit hashes should be sufficient for reproducibility, but relying on them is not best practice

I propose adopting versioning and making new tags every few commits (a tagging sketch follows at the end of this topic)

LW – Sounds good to me

Plan: Do this
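
As a minimal sketch of the tagging plan above: the Python snippet below finds the latest "v"-prefixed tag and creates the next patch-level one via git. The v0.0.1 naming and the helper itself are illustrative assumptions, not a decided convention.

    # Minimal sketch: bump the patch component of the latest "v"-prefixed
    # tag and create an annotated git tag. Tag naming is an assumption.
    import subprocess

    def latest_tag() -> str:
        # Most recent reachable tag, e.g. "v0.0.3"; fall back to v0.0.0
        # if the repo has no tags yet.
        try:
            result = subprocess.run(
                ["git", "describe", "--tags", "--abbrev=0"],
                check=True, capture_output=True, text=True,
            )
            return result.stdout.strip()
        except subprocess.CalledProcessError:
            return "v0.0.0"

    def next_patch(tag: str) -> str:
        major, minor, patch = tag.lstrip("v").split(".")
        return f"v{major}.{minor}.{int(patch) + 1}"

    new_tag = next_patch(latest_tag())
    subprocess.run(["git", "tag", "-a", new_tag, "-m", f"Release {new_tag}"], check=True)
    print(f"Created {new_tag}; push it with: git push origin {new_tag}")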

QCArchive 0.50 vs. Espaloma

Espaloma won’t be compatible with QCArchive 0.50 any time soon, but we would like ibstore to be. Is it OK to drop Espaloma tests etc. during this period? We could still drop back to an older version to make Espaloma comparisons.

  • Espaloma doesn’t need “0.50” in its API, but does need it in its tests (and might need it for something else)

  • Plan:

    • Update ibstore to use QC* 0.50+ (see the client sketch at the end of this topic)

    • (optional) See if espaloma’s feedstock can do without the qcportal=0.15 constraint

  • JW – I kinda doubt espaloma will get updated. But as a workaround, anyone who needs it can make Espaloma systems in a different environment and serialize the systems / compare numerical output.

  • LW – Can also install from source to avoid the dependency conflict
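
For context on the planned update, a minimal before/after sketch of the client-level change between qcportal 0.15 and 0.50, as understood from the two qcportal APIs; the dataset name is a placeholder.

    # qcportal 0.15-era access (the version espaloma's feedstock pins):
    #
    #     from qcportal import FractalClient
    #     client = FractalClient()
    #     ds = client.get_collection("OptimizationDataset", "<dataset name>")

    # qcportal 0.50+ access (what ibstore would move to):
    from qcportal import PortalClient

    client = PortalClient("https://api.qcarchive.molssi.org")
    ds = client.get_dataset("optimization", "<dataset name>")  # placeholder name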

Trello update

  • https://trello.com/b/dzvFZnv4/infrastructure?filter=label:benchmarking

  • MT – Design questions:

    • Which datasets to use for testing?

      • MT – I want to make a utility that helps make JSON blobs containing ibstore databases. These would be stored in the repo. I would like them to be in the ibstore database format (a serialization sketch appears at the end of this topic).

      • LW – So you want to construct datasets on the fly and save them. Probably simplest to use JSON blobs early on, even though they’re not space-efficient.

      • MT – So, should we save them or regenerate them when running tests? You’re saying save them.

      • LW – May be hard to automate for every scenario; every dataset will have different requirements

      • JW –

      • MT – Talking about

    • How should users be able to access entries in datasets?

    • How to create new datasets on the fly (should make a utility for this)?

    • MT – Could mock qcsubmit’s dataset.submit to get data

    • LW – No need to compute at all, could just save the qcsubmit collections that result.

    • JW – Can show how to run simple local jobs on QCFractal; we do that in qcsubmit’s tests

    • MT – I’m thinking of not even really running anything, but rather mocking the .submit call

    • JW – Wouldn’t need to mock .submit; instead, could mock the dataset retrieval call.

    • MT – Would need to run .submit to get a template

    • JW – Could just run things using the RDKit force field, or Sage, to get a real-looking set of mockable data.

    • MT – I’ll probably work on updating to QC* 0.50 first, and then I’ll poke around with trying to mock the return values (sketched below). Would love a sort of migration guide/blog post on how to update QCSubmit code.
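
On the JSON-blob idea above: a hypothetical sketch of writing a tiny test dataset into the repo and reading it back in tests. The record layout, values, and path are placeholders, not the actual ibstore schema.

    # Hypothetical sketch: store a tiny test dataset as a JSON blob in the
    # repo. Record layout and values are placeholders, not the real
    # ibstore schema.
    import json
    from pathlib import Path

    records = [
        {
            "smiles": "CCO",
            "final_energy": -154.08,  # made-up value, for illustration only
            "coordinates": [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.1, 0.9, 0.0]],
        },
    ]

    blob = Path("tests/data/tiny-opt-dataset.json")  # hypothetical path
    blob.parent.mkdir(parents=True, exist_ok=True)
    blob.write_text(json.dumps(records, indent=2))

    # Tests can then load the blob instead of hitting a live server:
    loaded = json.loads(blob.read_text())
    assert loaded[0]["smiles"] == "CCO"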
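And a minimal sketch of the mocking idea, using unittest.mock with a stand-in fetch function; the function, the mocked client, and the fake record are all hypothetical, since the exact qcsubmit/qcportal calls to mock weren't settled above.

    from unittest.mock import MagicMock

    # Hypothetical code-under-test that retrieves records from QCArchive.
    # In real tests the mocked object would be whichever retrieval call
    # ibstore ends up using (per JW, the retrieval rather than .submit).
    def fetch_records(client, name):
        return client.get_dataset("optimization", name)

    def test_fetch_records_without_network():
        fake_record = {"smiles": "CCO", "final_energy": -154.08}  # placeholder
        client = MagicMock()
        client.get_dataset.return_value = [fake_record]

        records = fetch_records(client, "tiny-opt-dataset")

        client.get_dataset.assert_called_once_with("optimization", "tiny-opt-dataset")
        assert records[0]["smiles"] == "CCO"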

✅ Action items


⤴ Decisions