Participants
Goals
Discussion topics
| Item | Notes |
| --- | --- |
| Memory leak/zombie CPU bug | ZW – I wonder if there are cases where bespoke-executor gets stuck and then ramps up memory usage. I have a couple of jobs at the QC data generation stage. Judging from CPU usage, all the fragments appear to have been generated and no psi4 is running, yet the openff-bespoke executor is using 8 cores and 242.6 GB of RAM. The RAM usage keeps climbing until it hits the RAM limit, and then the process begins reading heavily from the file system. The problem is not reproducible on demand, but it happens often enough to be quite annoying. The command I'm using is: `openff-bespoke executor run --workflow-file no-fragments-workflow.json --directory bespoke-executor-${SLURM_JOB_NAME} --file lig_64.sdf --output lig_64.json --output-force-field lig_64.offxml --n-qc-compute-workers 6 --qc-compute-n-cores 8` |
| MACE paper | DC – Released the MACE paper, our contribution to NN parameters. JH – I have a PR to get this into QCEngine; it currently has an academic license, though. AIMNet is also looking good. DC – Right, industry folks would need to contact that group to get access. |
Action items
Decisions