Participants
Discussion topics
Brainstormed ideas

Details for what we want:
- LW – A conda-installable tool that takes a force field and returns summary stats, and “just works” in vanilla mode
- LW – Should have a plugin interface
- JW – Preprogrammed “datasets” for each “target”, with current standard metrics (which can be updated and should remain forward-compatible, i.e. new versions can run historical “standard” metrics)
- A Python API that can be extended to a CLI (a rough sketch follows this list)
- Modular design – the workflow should easily be able to dump state to disk between stages
- Some way to specify metrics/targets:
  - Electrostatics
  - Geometry comparison
  - Energy comparison
  - Density, Hvap, dielectric, and the other properties from openff-evaluator
  - (Optional) ForceBalance objective function
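Not discussed in this level of detail at the meeting, but as an illustration of the ideas above, here is a minimal sketch of what a plugin-style Python API with dump-to-disk stages could look like. Every name in it (Target, GeometryTarget, Workflow, etc.) is hypothetical and does not belong to any existing OpenFF package.

```python
"""Hypothetical sketch of the plugin-style Python API brainstormed above.

All class and function names here are invented for illustration; they do not
correspond to an existing OpenFF package.
"""
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from pathlib import Path
from typing import Optional
import json


class Target(ABC):
    """A pluggable "target": given a force field, produce summary metrics."""

    name: str = "base-target"

    @abstractmethod
    def run(self, force_field_path: str) -> dict:
        """Return a dict of metric name -> value for this force field."""


class GeometryTarget(Target):
    """Placeholder geometry-comparison target."""

    name = "geometry-comparison"

    def run(self, force_field_path: str) -> dict:
        # A real implementation would re-minimize QC geometries with the
        # force field and compute RMSD/TFD-style statistics.
        return {"rmsd_mean": 0.0, "n_molecules": 0}


@dataclass
class Workflow:
    """Runs a list of targets and dumps state to disk between stages."""

    force_field_path: str
    targets: list = field(default_factory=list)
    results: dict = field(default_factory=dict)

    def run(self, checkpoint_dir: Optional[Path] = None) -> dict:
        for target in self.targets:
            self.results[target.name] = target.run(self.force_field_path)
            if checkpoint_dir is not None:
                # "Dump state to disk between stages" so a run can be resumed
                # or inspected without redoing earlier targets.
                checkpoint_dir.mkdir(parents=True, exist_ok=True)
                (checkpoint_dir / f"{target.name}.json").write_text(
                    json.dumps(self.results[target.name], indent=2)
                )
        return self.results


if __name__ == "__main__":
    # "Vanilla mode": sensible default targets, one force field in, stats out.
    # A CLI entry point could simply parse arguments and build this object.
    workflow = Workflow(
        force_field_path="openff-2.0.0.offxml",
        targets=[GeometryTarget()],
    )
    print(workflow.run(checkpoint_dir=Path("checkpoints")))
```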
For each target, some way to specify datasets:
- For QC datasets:
  - Load a whole dataset verbatim from QCArchive (QCA)
  - Load a whole dataset from QCA and then apply qcsubmit filters (see the sketch after this list)
  - Load from local files
  - (Optional) Submit calculations to be run (superseding the infrastructure from openff-benchmark)
- For phys prop datasets
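As an illustration of the “load from QCA, then filter” path, here is a sketch assuming the classic qcportal FractalClient together with the openff-qcsubmit results API; the dataset name and filter choices are only examples, and the exact filter arguments should be checked against the qcsubmit docs.

```python
# Sketch: pull an optimization dataset from QCArchive and filter it with
# openff-qcsubmit. Assumes the classic qcportal FractalClient and the
# openff.qcsubmit.results API; the dataset name below is just an example.
from qcportal import FractalClient

from openff.qcsubmit.results import OptimizationResultCollection
from openff.qcsubmit.results.filters import ConnectivityFilter, ElementFilter

client = FractalClient()  # public MolSSI QCArchive instance

# Load a whole dataset verbatim from QCA.
collection = OptimizationResultCollection.from_server(
    client=client,
    datasets=["OpenFF Gen 2 Opt Set 1 Roche"],
    spec_name="default",
)

# ...and then apply qcsubmit filters before computing any metrics.
filtered = collection.filter(
    ConnectivityFilter(),  # drop records whose connectivity changed
    ElementFilter(allowed_elements=["C", "H", "N", "O", "S", "F", "Cl"]),
)

print(f"{filtered.n_results} records remain after filtering")

# Loading from local files could reuse the same object via its JSON
# round-trip, e.g. writing filtered.json() to disk and parsing it back later.
```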
For computation backends:
- Instructions on how to set up Evaluator for distributed compute, ideally with a small example case like “get this two-simulation job working on two GPUs, and then your environment + config file should be ready to go” (a sketch follows below). This should be aimed at INTERNAL people, so it can be a bit sparse/require some hacking, but it should act as a quick-start guide for a lab joining OpenFF to get Evaluator running.
- A docs link to this page from the openff-benchmarks material showing how to set up QCFractal for distributed compute in a variety of contexts (something something submit to F@H something something, details forthcoming)
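For reference, a minimal sketch of the “two simulations on two GPUs” quick start, assuming the openff-evaluator Dask local-cluster backend and following its tutorial pattern; the port, worker settings, data set file, and force field are placeholders for a lab to adapt.

```python
# Sketch: run an Evaluator server plus two GPU workers on one machine,
# following the openff-evaluator tutorial pattern. File names, port, and
# worker settings are placeholders.
from openff.evaluator.backends import ComputeResources
from openff.evaluator.backends.dask import DaskLocalCluster
from openff.evaluator.client import ConnectionOptions, EvaluatorClient, RequestOptions
from openff.evaluator.datasets import PhysicalPropertyDataSet
from openff.evaluator.forcefield import SmirnoffForceFieldSource
from openff.evaluator.server import EvaluatorServer

# One GPU per worker, two workers -> two simulations can run concurrently.
resources = ComputeResources(
    number_of_threads=1,
    number_of_gpus=1,
    preferred_gpu_toolkit=ComputeResources.GPUToolkit.CUDA,
)
backend = DaskLocalCluster(number_of_workers=2, resources_per_worker=resources)
backend.start()

# The server hands calculations to the backend; the client talks to the server.
server = EvaluatorServer(calculation_backend=backend, port=8000)
server.start(asynchronous=True)

# A tiny data set (e.g. two densities) keeps this a quick smoke test.
data_set = PhysicalPropertyDataSet.from_json("two_properties.json")
force_field = SmirnoffForceFieldSource.from_path("openff-2.0.0.offxml")

options = RequestOptions(calculation_layers=["SimulationLayer"])

client = EvaluatorClient(ConnectionOptions(server_port=8000))
request, _ = client.request_estimate(
    property_set=data_set,
    force_field_source=force_field,
    options=options,
)

# Block until both simulations finish; an exception here usually means the
# environment or config still needs attention.
results, exception = request.results(synchronous=True, polling_interval=30)
print(results.estimated_properties if exception is None else exception)
```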
Action items
Decisions