General updates
- JW – For Rosemary charges, PIs are leaning toward a strategy of "graph net charges based on AM1, followed by SMARTS-based BCCs", with "normal library charges" as a backup. Just mentioning it here so we make sure to bring it up in the FF-release call; I'll raise it for discussion there, but in case I don't, CC, you may want to ask any PIs present about it.
- CC – Surprised this didn't come up in the roadmap planning meeting.
- SB – That seems right at a high level. Their thinking is that, even if we can't get graph-based charges, we can have something that handles nonstandard AAs and post-translational modifications (PTMs). This relies on getting graph charges that are consistent and of high enough quality.
- PB – Plan for vsites?
- SB – My plan would be for the graph net to capture AM1 charges, and then to train BCCs on top of that. Training vsite charges would then be like training BCCs. (See the sketch below for the general shape of this.)
- MT – Would this lead to a hard dependency on pytorch?
- SB – In principle, yes. The goal would be to handle this through DGL, so it wouldn't be a hard dependency on pytorch; it could eventually use JAX or TF.
- MT – Last I looked into this, DGL isn't getting adoption in all the ML packages (like JAX).
- SB – There is some uncertainty, but even if we only support pytorch, there's a simple CPU package that can get the job done.
- MT – I'd be concerned that users may just rely on library charges if they have trouble installing pytorch.
- SB – The Rosemary FF itself will specify only one charge method, either library charges or graph net. So the FF will EITHER use GCNs and be able to handle PTMs, or it will use library charges and not be able to handle PTMs.
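For reference, a minimal sketch of what "graph net base charges + SMARTS-based BCCs" could look like. This is not the actual implementation: `predict_base_charges` is a hypothetical stand-in for the graph net, and the SMARTS patterns and BCC values are made-up placeholders. Vsite charges trained "like BCCs" would amount to adding similar SMARTS-keyed increments for virtual sites.

```python
# Minimal sketch (not OpenFF code): hypothetical graph-net charge predictor
# plus SMARTS-keyed bond charge corrections with illustrative values.
from rdkit import Chem


def predict_base_charges(mol):
    """Stand-in for the graph net that would reproduce AM1-like charges.

    Here we just return formal charges as placeholders; the real model
    (e.g. a DGL/pytorch GCN) would predict per-atom partial charges.
    """
    return [float(atom.GetFormalCharge()) for atom in mol.GetAtoms()]


# Illustrative SMARTS-keyed BCCs (made-up values): each pattern matches an
# ordered pair of bonded atoms; the correction is added to the first atom
# and subtracted from the second, so the net molecular charge is conserved.
BCCS = {
    "[#6X4:1]-[#8X2:2]": 0.03,  # sp3 carbon - ether/alcohol oxygen
    "[#6X4:1]-[#7X3:2]": 0.02,  # sp3 carbon - amine nitrogen
}


def apply_bccs(mol, charges, bccs=BCCS):
    """Apply SMARTS-based bond charge corrections on top of base charges."""
    corrected = list(charges)
    seen_bonds = set()
    for smarts, bcc in bccs.items():
        query = Chem.MolFromSmarts(smarts)
        for i, j in mol.GetSubstructMatches(query):
            bond = frozenset((i, j))
            if bond in seen_bonds:  # apply at most one correction per bond
                continue
            seen_bonds.add(bond)
            corrected[i] += bcc
            corrected[j] -= bcc
    return corrected


if __name__ == "__main__":
    mol = Chem.AddHs(Chem.MolFromSmiles("CCO"))  # ethanol
    base = predict_base_charges(mol)
    final = apply_bccs(mol, base)
    print(sum(final))  # net charge is unchanged by the BCC step
```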
Individual updates