Meeting context

JW – I apologize, I was in a rush and didn’t communicate how this was all coming together. I’ll do better about this in the future.

Original message from JW: Basically, my thinking is:
- Eventually OpenFE will want to say 'hey, our protocol gives the right answers and you should use it'
- In order to say that, OpenFE will need to have run a bunch of free energy calculations
- In order to run a bunch of free energy calculations, OpenFE will need lots of compute power
- This project can offer a bunch of compute power, but we want to de-risk the possibility that we'll provide it in a form that's incompatible with your needs
This is, of course, beneficial to us too, because benchmarking of OpenFE workflows is ALSO benchmarking of OpenFF force fields.
JW – I saw you discussing some deep internal architecture stuff on Slack, but there are some huge decisions that you should probably be involved in, like “what file formats should the inputs be in? PDB? prmtop?”
DD – What are your primary hesitations about being involved in this project at the approver level?
RG – Reading through the doc, it looks like it’s being built specifically for F@H. But that’s not useful for us. If this were going to be more general than F@H, then we’d be interested.
JW – My understanding is that you’d be allowed to submit calculations to F@H through this infrastructure.
RG – We don’t need additional compute, since we have plenty for our own needs, and our USERS won’t be able to submit to F@H (their work is proprietary, and it’s unreasonable to expect them to open it to the public).
DD – Some of these components could be reusable, but it is mostly specific to F@H. Still, we’ll benefit from your feedback on this and from understanding your preparation pipeline.
RG – Agreed. We don’t want to be formally involved unless there’s something concrete we’ll gain, but we’re happy to advise and help coordinate.
DD – Understood. I’d love to keep my ear to the ground with you all and be involved in your coordination calls.
RG – We’re aiming to have a basic PDB + FF → OpenMM system pipeline released around July. I’d be happy to invite you to our coordination calls.
RG – Could you comment on how flexible the backend is going to be? The results store? Where will the inputs be stored?
DD – A copy of the inputs (currently GROMACS or OpenMM) will be stored in the work units, but they’re going to be fairly F@H-specific. Initial submissions will go through a REST API.
RG – Yeah, so I don’t think we’ll
JW – Does OpenFE have an estimate of their compute requirements for 2022?
RG – It looks like about 1 GPU-day per edge. No precise estimate right now, but we have access to large academic GPU clusters in the UK/EU, and we don’t anticipate coming anywhere close to saturating them.
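RG's figure supports a simple back-of-envelope estimate for any proposed benchmark network. A minimal sketch, where the ~1 GPU-day/edge number comes from the discussion above and the network size and replica count are purely illustrative assumptions:

```python
# Rough compute estimate based on RG's ~1 GPU-day per edge figure.
# The network size and replica count used below are hypothetical examples,
# not a real OpenFE benchmark set.
GPU_DAYS_PER_EDGE = 1.0

def gpu_days(n_edges: int, replicas: int = 1) -> float:
    """Total GPU-days for a perturbation network of n_edges,
    with each edge run `replicas` times."""
    return n_edges * replicas * GPU_DAYS_PER_EDGE

# e.g. a 500-edge network run in triplicate:
print(gpu_days(500, replicas=3))  # → 1500.0
```

Even a few thousand GPU-days is small next to a large academic cluster, which is consistent with RG's point about not saturating their existing resources.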
RG – So, I don’t think it’s in our interest to be approvers here.
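The REST submission path DD mentions was not yet specified at the time of this meeting. As a purely hypothetical sketch of what a work-unit submission payload might look like (the function, field names, and engine labels below are all assumptions, not the project's actual interface):

```python
import json

def build_submission(inputs_uri: str, engine: str = "openmm") -> str:
    """Serialize a (hypothetical) work-unit submission for the planned REST API.

    Field names are illustrative only; the real schema had not been
    defined when this meeting took place.
    """
    if engine not in ("openmm", "gromacs"):  # the two engines mentioned above
        raise ValueError(f"unsupported engine: {engine}")
    payload = {
        "engine": engine,       # which MD engine the stored inputs target
        "inputs": inputs_uri,   # location of the serialized input files,
                                # a copy of which would live in the work unit
    }
    return json.dumps(payload)

# A client would POST this JSON to the (as-yet-undefined) submission endpoint.
print(build_submission("systems/edge_01.tar.gz"))
```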