
Participants

Discussion topics

Brainstormed ideas

  • Details for what we want:

  • LW – A conda-installable package that takes a force field, returns summary statistics, and “just works” in vanilla mode

  • LW – Should have a plugin interface

  • JW – Preprogrammed “datasets” for each “target”, with the current standard metrics (updatable and forward-compatible: new versions should still be able to run historical “standard” metrics)

  • A Python API that can be extended to a CLI

  • Modular design - the workflow should make it easy to dump state to disk between stages

  • Some way to specify metrics/targets

    • Electrostatics stuff

      • Electric fields and ESP grids from openff-recharge

    • Geometry comparison

      • TFD and RMSD, from openff-benchmark

    • Energy comparison

      • RMSE from openff-benchmark

    • Density, Hvap, Dielectric, all the other stuff from openff-evaluator

    • (Optional) ForceBalance objective function

      • (There will be a bunch of knobs and dials for weights and other objective function calculation details)
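
The metric list above points toward a plugin-style registry, where built-in and third-party metrics register under a name and a runner produces summary stats. A minimal sketch of that idea, with all names (`register_metric`, `METRICS`, `summarize`) invented for illustration and not actual openff-benchmark API:

```python
from typing import Callable, Dict, List

# Global registry mapping metric names to functions; a real implementation
# might use entry points so external plugins can register themselves.
METRICS: Dict[str, Callable[[List[float], List[float]], float]] = {}

def register_metric(name: str):
    """Decorator so built-ins and plugins can add new metrics by name."""
    def decorator(func):
        METRICS[name] = func
        return func
    return decorator

@register_metric("rmse")
def rmse(reference, predicted):
    # Root-mean-square error, e.g. between reference and FF energies.
    return (sum((r - p) ** 2 for r, p in zip(reference, predicted)) / len(reference)) ** 0.5

@register_metric("mae")
def mae(reference, predicted):
    # Mean absolute error, as a second example metric.
    return sum(abs(r - p) for r, p in zip(reference, predicted)) / len(reference)

def summarize(reference, predicted, names=None):
    """Run every requested (or every registered) metric; return summary stats."""
    names = names or sorted(METRICS)
    return {name: METRICS[name](reference, predicted) for name in names}

print(summarize([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```

New metrics (TFD, densities, an objective function) would be additional registered functions rather than changes to the runner, which is what keeps old “standard” metric sets runnable by newer versions.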

  • For each target, some way to specify datasets:

    • For QC datasets,

      • Load a whole dataset verbatim from QCA

      • Load a whole dataset from QCA and then apply qcsubmit filters

      • Load from local files

      • (Optional) Submit calculations to be run (superseding infra from openff-benchmark)

    • For physical property datasets,

      • Read in from CSV

      • Read in from a pandas DataFrame

      • Read from remote archive (examples from previous work?)

  • For computation backends:

    • Instructions on how to set up evaluator for distributed compute, ideally with a small example case like “get this two-simulation job working on two GPUs, and then your environment + config file are ready to go”. This should be aimed at INTERNAL people, so it can be a bit sparse and require some hacking, but it should act as a quick-start guide for a lab joining OpenFF to get evaluator running.

    • A docs link to this page from the openff-benchmarks material showing how to set up QCF for distributed compute in a variety of contexts

    • (something something submit to F@H something something, details forthcoming)
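
Tying the brainstorm together: the “modular design” and “dump state to disk between stages” ideas suggest a pipeline of stages that each take and return a state dict, with a checkpoint written after every stage, and a thin CLI wrapping the same Python API. A minimal sketch under those assumptions (all names invented; the force field filename is a placeholder):

```python
import json
import os
import tempfile

def run_stages(state, stages, workdir):
    """Run stages in order, checkpointing JSON state to disk after each one
    so a later run can resume from any stage boundary."""
    for i, stage in enumerate(stages):
        state = stage(state)
        with open(os.path.join(workdir, f"stage_{i}.json"), "w") as f:
            json.dump(state, f)
    return state

# Two toy stages standing in for "load dataset" and "compute metrics".
def load_stage(state):
    return {**state, "energies": [1.0, 2.0, 3.0]}

def metrics_stage(state):
    e = state["energies"]
    return {**state, "mean_energy": sum(e) / len(e)}

with tempfile.TemporaryDirectory() as tmp:
    final = run_stages({"force_field": "my_ff.offxml"},
                       [load_stage, metrics_stage], tmp)
    print(final["mean_energy"])  # → 2.0
    # A CLI entry point would just parse arguments into the initial state
    # and call run_stages(), keeping the Python API as the single source
    # of behavior.
```

Because every stage boundary is a plain file on disk, compute-heavy stages (evaluator or QCF jobs on distributed hardware) can run separately from cheap analysis stages.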



Action items

  •  

Decisions
