LW – For the error tolerance of the equilibration step of the EquilibrationLayer, is it suitable to just look at PE instead of total energy?
DM – I generally look at density. Correlation times for energy can be misleadingly short.
LW – I should talk with the MS group; I think they'd found the opposite.
DM – Maybe if your iterations are long enough, the energy fluctuations are getting smoothed out.
BS – Yeah, energy fluctuations can get misleadingly smoothed out by having big iterations.
LW – For each 200 ps block, we run a check of whether it's equilibrated or not. We use the pymbar detect_equilibration scheme.
DM – My thinking is that that'll fail for short blocks because it's too noisy, though I've seen it be well behaved for density.
LW – Maybe we extend from 200ps to 1ns?
BS – That’d be a good idea.
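For reference, a minimal sketch of the per-block check being discussed, assuming pymbar 4's timeseries module and a 1-D array of observable samples per block. The threshold and the burn-in heuristic are illustrative assumptions, not Evaluator's actual tolerances:

```python
import numpy as np
from pymbar import timeseries

def block_is_equilibrated(samples: np.ndarray, min_uncorrelated: int = 50) -> bool:
    """Run pymbar's detect_equilibration on one block of samples,
    e.g. PE over a 200 ps (or proposed 1 ns) block."""
    # t0: index where production starts, g: statistical inefficiency,
    # n_eff: number of effectively uncorrelated samples after t0
    t0, g, n_eff = timeseries.detect_equilibration(samples)
    # Call the block equilibrated only if enough uncorrelated samples
    # remain after the detected burn-in (both cutoffs are made-up heuristics)
    return n_eff >= min_uncorrelated and t0 < len(samples) // 2
```

Extending the block from 200 ps to 1 ns mainly gives detect_equilibration more samples per call, which is what makes the estimate less noisy.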
BS – I also generally do an NVT equil, then NPT.
LW – In Evaluator, NPT is the default for both equil stages.
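A minimal OpenMM sketch of the NVT-then-NPT pattern BS describes (not Evaluator's actual protocol, which currently defaults to NPT for both stages). Step counts and conditions are illustrative, and `simulation` is assumed to be an `openmm.app.Simulation` set up without a barostat:

```python
import openmm
import openmm.unit as unit

def equilibrate(simulation, nvt_steps=50_000, npt_steps=250_000,
                pressure=1.0 * unit.atmosphere, temperature=300 * unit.kelvin):
    """NVT stage first (relax clashes at fixed volume), then NPT
    (let the density settle)."""
    simulation.context.setVelocitiesToTemperature(temperature)
    simulation.step(nvt_steps)  # NVT: no barostat present yet

    # Switch on a barostat for the NPT stage
    simulation.system.addForce(openmm.MonteCarloBarostat(pressure, temperature))
    simulation.context.reinitialize(preserveState=True)
    simulation.step(npt_steps)
```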
BS – … If you’re taking starting states from a previous iteration of the FF, they should be relatively close, right?
LW – The starting states in the EquilibrationLayer come directly from packmol. The starting states for PESL are in theory FF-agnostic.
BS – That makes sense. Just make sure not to change e.g. sigma a lot and introduce clashes. So I think starting from the results of previous sims would be pretty good.
LW – My starting hypothesis is that the FF shouldn’t change too much. If that becomes a problem then we can think about functionality to use different equil’d boxes for different FFs.
BS – I also like the idea of looking at density in addition to energy.
LW – That’d be interesting, could work with infra team on this.
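If density were tracked alongside energy, the block check might simply require every observable to pass, reusing the `block_is_equilibrated` sketch above. The observable names here are hypothetical, not Evaluator schema keys:

```python
def all_observables_equilibrated(block_data: dict[str, np.ndarray]) -> bool:
    """block_data maps observable name -> samples for one block,
    e.g. {"potential_energy": pe, "density": rho}."""
    return all(block_is_equilibrated(samples) for samples in block_data.values())
```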
PB – Will you be using this protocol for organic solvents as well as water? Are you testing with any organic solvents?
LW – I’m testing on the properties that shirts group was having trouble with, which were mostly water. But the full test suite will have organic solvents as well.
PB – Worth considering whether defaults/behaviors will change for organic solvents.
Evaluator-on-NRP refresher
JW
JW – I forgot what I said the infra team would do, could I get a refresher? I recall two items:
Running on DASK-Kubernetes
LW has a prototype of this
JW – Could I get a recap of existing work items/conversation on this?
LW – MT and I checked in on this late last year, can’t recall if there are concrete action items.
MT – Not completely sure how to move forward. Could add tests, but they would be difficult to spin up and probably expensive to run. LW's PR has some tests already. I think I need to decide whether to move forward with this.
JW – Sounds like this is on MT’s plate for now, even if it means “figuring out which questions to ask”
LW – Happy to walk through the branch now, it’s kinda prototype-y
MT – Would love a walkthrough. I think we did one earlier and I kinda followed, but a refresher would be great. There's one particularly ugly wart that you had pointed out, which I agree is rough. But I'm also open to more info on where you think the current code is prototype-y.
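For context during the walkthrough, the core dask-kubernetes pattern looks roughly like this. The cluster name, image, and worker resources are placeholders, not the branch's actual configuration, and the image would need Evaluator's dependencies (OpenMM, CUDA, etc.) baked in:

```python
from dask.distributed import Client
from dask_kubernetes.operator import KubeCluster

# Spin up a Dask scheduler and workers as pods on the Kubernetes cluster
cluster = KubeCluster(
    name="evaluator-workers",                   # placeholder name
    image="ghcr.io/example/evaluator:latest",   # placeholder image
    n_workers=4,
    resources={"limits": {"nvidia.com/gpu": "1"}},  # one GPU per worker pod
)

client = Client(cluster)  # submit work to the pods like any Dask cluster
```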
Getting pre-equilibration changes into Evaluator
LW – the two items are independent
LW – the pre-equilibration workflow branch branches off the k8s branch, but the changes are independent
JW – sounds like still science to be done, so maybe we focus on k8s today
LW – sounds good
JW – on resources, a few thoughts
JW – This won’t affect anything scientifically but is obviously suboptimal
MT – Mostly agree. The ugliness is 70% form and not function, i.e. it doesn't look pretty but it behaves well. Future improvements would be atomic changes (they can be refactored independently; the complexity isn't contagious). I didn't really see other places where it could be improved. Also, most of the new code is tested, looks good, adds a valuable feature, and, importantly, works in the wild. So beyond some minor details that I'll bring up in a review, this is probably good to merge+release.
LW – One thing is, when I actually use it, I have run_fit.py, which is quite finicky. When I run it, I wonder how much of it could be put into Evaluator, though not all of it would seem to fit. It basically runs through all the steps I put in the evaluator-on-kubernetes document.
MT – My intuition is to keep it outside evaluator for now, which is a weak preference.
JW – Similarly weak preference, also to keep it separate/outside the evaluator repo. Maybe if this had been used for a year or so already and was in a stable/general state, it could come in as a utility. But for now, I think if it goes in version control it should be in something artifactual like a release repo.
LW – Mostly agree
MT – Hard to put a strict "year" timeline on this sort of thing, and maybe it should go somewhere like a "fitting tools" repo rather than the evaluator repo itself.
JW – Yeah, maybe a separate repo, or a specific “cookbook” style entry for “running on NRP”
LW – Yeah, I don’t know how much this would generalize to kubernetes clusters outside NRP.
JW – In terms of how we can help improve the quality of the script, I think we could only give superficial advice on the code unless we have a task to try running.
LW – Yeah, I think the major development of this script will come from having more users try it out. The technical/performance-sensitive changes are in Evaluator itself.
JW – Would having infra team take a look at bottlenecks/performance of the script be important/urgent to you?
LW – Low urgency, medium importance. Main thing is that we keep GPU utilization over 50% on NRP, which we currently do.
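A minimal sketch of the kind of utilization spot-check being described, shelling out to nvidia-smi on a worker node. The 50% target comes from the discussion above; everything else is illustrative:

```python
import subprocess

def mean_gpu_utilization() -> float:
    """Query instantaneous GPU utilization (%) across all visible GPUs."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    values = [float(line) for line in out.splitlines() if line.strip()]
    return sum(values) / len(values)

# e.g. flag workers that drop below the ~50% target discussed above
if mean_gpu_utilization() < 50.0:
    print("GPU utilization below target")
```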
MT will review the PR; it structurally seems good, but he will ask for some docs improvements and other changes. Then will coordinate on release + deployment.
MT – Should pre-eq work be merged before I include the above work in a release?
LW – No, that will take a while longer, and there’s no dependence of the DASK stuff on pre-eq