Driver | The main driver(s) or executor(s) of the project |
---|---|
Approver | The person who gives final approval on the project |
Contributors | People who contribute work or discussion to the project, e.g. would be credited on any released product or manuscript. |
Other stakeholders | People who should be kept informed of project updates, e.g. should be invited to relevant meetings. |
Objective | Summarise the objective in 1-2 sentences, e.g. "a force field with a set of virtual sites", or "a lipid force field" |
Time frame | Expected time frame for the project to be finished, e.g. Q1 2025. We expect to revisit this as the project progresses, but aim for a realistic estimate that allows for iterative trials. |
Key outcomes | The key outcomes or deliverables from this project. Be specific about the features and attributes that describe a successful outcome. Use dot points to be concise. e.g., a force field including: |
Key metrics | The complete suite of benchmarks you will use to measure success, e.g. improved or equivalent performance on valence benchmarks to the Industry Benchmark Set, improved performance on a curated set of solvation free energies from FreeSolv and MNSol, ... |
Status | |
GitHub repo | A link to a GitHub repo containing work on the project |
Slack channel | The go-to Slack channel for discussion about this project |
Designated meeting | The go-to meeting for discussion and updates about this project |
Released force field | The first released force field this work appears in, or N/A if the project is ended due to poor results. |
Publication | The publication on the project, if any. |
Get Evaluator running on Kubernetes smoothly, such that we can practically use it for a vdW fit.
Must have: | |
---|---|
Nice to have: | |
Not in scope: | |
Currently an Evaluator fit is started on a local laptop with ForceBalance, which under the hood uses an EvaluatorClient to communicate with an EvaluatorServer. On an HPC the EvaluatorServer can also run locally, but it appears intended to be set up as a remote server. What is most important is that the EvaluatorServer has access to the same filesystem as the Dask workers. The Dask cluster is set up via the calculation_backend.
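For orientation, here is a minimal sketch of that current, local workflow as I understand the openff-evaluator API; the port number and the asynchronous flag are arbitrary choices:

```python
from openff.evaluator.backends.dask import DaskLocalCluster
from openff.evaluator.server import EvaluatorServer
from openff.evaluator.client import EvaluatorClient, ConnectionOptions

# The calculation backend owns the Dask cluster that actually runs the tasks.
calculation_backend = DaskLocalCluster()
calculation_backend.start()

# The server queues requests and hands tasks to the backend; it must share a
# filesystem with the Dask workers.
server = EvaluatorServer(calculation_backend=calculation_backend, port=8000)
server.start(asynchronous=True)

# ForceBalance effectively does this under the hood when estimating its
# physical property targets.
client = EvaluatorClient(ConnectionOptions(server_address="localhost", server_port=8000))
```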
To deploy on NRP:
As mentioned, the EvaluatorServer must have access to the same file system as the workers. That means it must also be remote. It is a separate Kubernetes object from the DaskCluster; ideally it is a Deployment. I’m not sure what the consequences are if the connection between an EvaluatorServer and the local client is broken, but with progress saved on the shared filesystem and locally there should be ways to keep going.
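A sketch of what such a Deployment could look like with the kubernetes Python client; the namespace, names, image and mount path are placeholders, and it assumes the container image has openff-evaluator installed and that `server-existing.py` starts the EvaluatorServer:

```python
from kubernetes import client, config

config.load_kube_config()
namespace = "openforcefield"  # placeholder

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="evaluator-server", namespace=namespace),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "evaluator-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "evaluator-server"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="evaluator-server",
                        image="ghcr.io/example/openff-evaluator:latest",  # placeholder
                        command=["python", "server-existing.py"],
                        ports=[client.V1ContainerPort(container_port=8000)],
                        # Mount the same PVC the Dask workers use, so the server
                        # sees the same filesystem.
                        volume_mounts=[
                            client.V1VolumeMount(name="shared", mount_path="/evaluator-storage")
                        ],
                    )
                ],
                volumes=[
                    client.V1Volume(
                        name="shared",
                        persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                            claim_name="evaluator-storage"
                        ),
                    )
                ],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace=namespace, body=deployment)
```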
We don’t have permissions to set up a Dask cluster through the EvaluatorServer on a remote deployment, because we don’t have permissions to create a Dask cluster from an existing k8s pod. That means we have to create two Evaluator backends:
The “real” one, which deploys the workers and scheduler on NRP and sets up adaptive scaling (e.g. https://github.com/lilyminium/openff-evaluator/blob/368341c3c465e5269508906e9c3ef8623d7fa9ae/openff/evaluator/backends/dask_kubernetes.py#L218)
The communication one for the EvaluatorServer, which simply connects the server to the existing DaskCluster on NRP through the scheduler port (e.g. https://github.com/lilyminium/openff-evaluator/blob/368341c3c465e5269508906e9c3ef8623d7fa9ae/openff/evaluator/backends/dask_kubernetes.py#L359); see the sketch below.
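At its core, the communication backend only needs to attach a plain Dask client to the scheduler that the existing DaskCluster already exposes, rather than creating and owning a cluster itself. A rough sketch (the scheduler address is illustrative; inside the cluster this would be the scheduler service, from a laptop a port-forwarded localhost address):

```python
from distributed import Client

# Address of the already-running DaskCluster scheduler on NRP (placeholder).
scheduler_address = "tcp://evaluator-dask-scheduler:8786"

# Connect to the existing scheduler instead of spinning up a new cluster; the
# Evaluator backend then submits its tasks through this client.
dask_client = Client(scheduler_address)
print(dask_client.scheduler_info()["workers"])
```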
Step-by-step the process is (a sketch of the first steps follows the list):
Create a PVC on NRP to serve as the filesystem
Create and start a DaskCluster on NRP, with the PVC mounted
Create a deployment with an EvaluatorServer that connects to the DaskCluster scheduler
Start the EvaluatorServer
Port-forward the EvaluatorServer port so ForceBalance can connect to it via localhost
Run ForceBalance
(optional) port-forward the dashboard to monitor jobs
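A rough sketch of steps 1–2, assuming the kubernetes Python client and the dask_kubernetes operator API; names, image and storage size are placeholders:

```python
from kubernetes import client, config
from dask_kubernetes.operator import KubeCluster, make_cluster_spec

config.load_kube_config()
namespace = "openforcefield"  # placeholder

# 1. Create a PVC to serve as the shared filesystem.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="evaluator-storage"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        resources=client.V1ResourceRequirements(requests={"storage": "500Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace=namespace, body=pvc)

# 2. Create and start a DaskCluster with the PVC mounted on scheduler and workers.
spec = make_cluster_spec(
    name="evaluator-dask",
    image="ghcr.io/example/openff-evaluator:latest",  # placeholder
    n_workers=1,
)
for role in ("scheduler", "worker"):
    pod_spec = spec["spec"][role]["spec"]
    pod_spec["volumes"] = [
        {"name": "shared", "persistentVolumeClaim": {"claimName": "evaluator-storage"}}
    ]
    pod_spec["containers"][0]["volumeMounts"] = [
        {"name": "shared", "mountPath": "/evaluator-storage"}
    ]
cluster = KubeCluster(custom_cluster_spec=spec, namespace=namespace)

# 3-7. The EvaluatorServer Deployment (see the sketch above), a
# `kubectl port-forward` of the server (and optionally dashboard) port, and the
# ForceBalance run then follow.
```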
Stopping (a teardown sketch follows the list):
Stop the EvaluatorServer and DaskCluster (order doesn’t matter)
Delete the PVC
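A matching teardown sketch, under the same placeholder names as above:

```python
from kubernetes import client, config

config.load_kube_config()
namespace = "openforcefield"  # placeholder

# Remove the EvaluatorServer Deployment and the DaskCluster (order doesn't matter).
client.AppsV1Api().delete_namespaced_deployment(name="evaluator-server", namespace=namespace)
client.CustomObjectsApi().delete_namespaced_custom_object(
    group="kubernetes.dask.org", version="v1", plural="daskclusters",
    name="evaluator-dask", namespace=namespace,
)
# Or, if the KubeCluster object is still in scope, simply: cluster.close()

# Finally delete the PVC; this discards the shared filesystem.
client.CoreV1Api().delete_namespaced_persistent_volume_claim(
    name="evaluator-storage", namespace=namespace
)
```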
Resources
The current way I’ve been partitioning GPU/CPU jobs is with resources (https://distributed.dask.org/en/latest/resources.html). I have been clumsily hardcoding `--resources GPU=1,notGPU=0` and `--resources GPU=0,notGPU=1` onto my GPU/CPU workers respectively, and specifying resources for individual tasks: https://github.com/lilyminium/openff-evaluator/blob/368341c3c465e5269508906e9c3ef8623d7fa9ae/openff/evaluator/backends/dask_kubernetes.py#L186-L203. Another way to do it, I believe, is just by specifying which workers are allowed to act on the task.
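To illustrate both options with plain Dask (the scheduler address, worker address and task function are placeholders):

```python
from distributed import Client

client = Client("tcp://evaluator-dask-scheduler:8786")  # placeholder scheduler address

def run_protocol(name):
    ...  # placeholder for the actual work

# Workers started with `--resources GPU=1,notGPU=0` (GPU pods) or
# `--resources GPU=0,notGPU=1` (CPU pods) advertise these abstract resources,
# and a task can require one unit of GPU to be routed to a GPU worker...
future = client.submit(run_protocol, "equilibration", resources={"GPU": 1})

# ...or the task can instead be restricted to named workers directly.
future = client.submit(
    run_protocol,
    "energy_minimisation",
    workers=["tcp://10.0.0.5:34567"],  # placeholder worker address
    allow_other_workers=False,
)
```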
Adaptability
Currently I’ve been creating one DaskCluster with the "default" GPU worker group and an additional "cpu" worker group. However, adaptive scaling can only be applied to the default group. If we wanted adaptive scaling for the CPU worker group too, it may be cleaner and more elegant to have separate clusters for each worker type. This may not be possible with how the tasks are linked, however. Here is a short discussion of the merits (https://github.com/dask/dask-gateway/issues/285)
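For reference, with the dask_kubernetes operator the adaptive scaling request lives on the cluster object and, as far as I can tell, only targets the default worker group; a short sketch under the same assumptions as above:

```python
# Let the default (GPU) worker group scale between 0 and 20 workers with load.
# The additional "cpu" worker group is not covered by this call and keeps
# whatever replica count it was created with.
cluster.adapt(minimum=0, maximum=20)
```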
Use the "Science Project Phase Plan" template to create child pages under this one to document each phase of the project. They will be automatically listed below.
I ran a proof of concept with `python run-job.py`:
It copies across `server-existing.py` for the EvaluatorServer
The spec of the DaskKubernetes cluster is almost fully documented in `cluster-spec.yaml` – the CPU workers are not present.