2022-11-01 Protein-ligand benchmarks meeting notes
Participants
@David Dotson
@Iván Pulido
@Irfan Alibay
@John Chodera
@Mike Henry
@Richard Gowers
@Jeffrey Wagner
@David W.H. Swenson
@Diego Nolasco (Deactivated)
Jenke Scheen
Goals
DD : fah-alchemy - current board status
- requesting 1 week extension for fah-alchemy MVP - decision required
- @David Dotson development effort now focused on FahAlchemyAPIServer, FahAlchemyClient, and FahAlchemyComputeServer
- Task queue system mostly implemented; working on reference implementation for FahAlchemyComputeService that consumes tasks, returns ProtocolDAGResults; will build on reference implementation for F@H-facing production service
- Building out test suite for fah-alchemy using openfe-benchmark networks while we work toward protein-ligand-benchmark 0.3.0; need mappings - what is OpenFE using for test networks with mappings?
- Uncovered subtle bug in py2neo that was blocking effective use of test suite; seemingly random failures in roundtripping objects; will consider dropping py2neo for the official Python driver in the future, since py2neo appears nearly unmaintained now
IA : protein-ligand-benchmark : blockers and priorities
IP : Nonequilibrium Cycling Protocol (perses #1066) update
MH : ProtocolSettings taxonomy (gufe #37) update
Discussion topics
Item | Notes |
---|---|
DD : fah-alchemy - current board status; MVP extension request | |
IA : protein-ligand-benchmark : blockers and priorities | |
IP : Nonequilibrium Cycling Protocol (perses #1066) update | |
MH : ProtocolSettings taxonomy (gufe #37) update | |
Transcripts with no edits | F@H interface meeting - November 01
@00:01 - David Dotson I'm just going to use it in a loop for executing individual DAGs, and a process pool executor for actually executing protocol units within the DAG. That will still run simulations locally for protocols that are built to run simulations locally, but it will be the basis for the F@H compute service, the one that interfaces with Folding@home. So I'm trying to give us a step ladder of complexity instead of trying to swallow all the complexity in one go, because testing against Folding@home is going to be a pain, getting all these little details right. So this allows me to iterate quickly on making sure that our end, which is everything above here, is working before we start working on this end, to the right.
@00:57 - John Chodera (he/him/his) Just to understand the future path here too, does the current object model support some sort of weight per DAG task that we could eventually fiddle around with in the future, or is there no provision for that at the moment?
@01:10 - David Dotson You said weights as in W-E-I-G-H-T?
@01:13 - John Chodera (he/him/his) Yes.
@01:14 - David Dotson Yes, there are; I'll show you that model right now. Let me look where this is most useful. I showed you some of this yesterday, or not yesterday, sorry, Thursday; the days are all blurring together. So for folks who haven't seen this yet, that's fine. This is the task queue system. Let me try to pan around. It looks confusing and I can explain it, but basically any given alchemical network has a single task queue, and any given task corresponding to a transformation can be a member of one or more queues; that's perfectly allowed. Task queues have a concept of a weight. So here, if I click on this guy, you can see this guy has a weight of 0.5. That allows you to basically say: if a weight is higher on a given alchemical network, that one should get selected more often to be drawn from by a compute service. Individual tasks get a priority, so that allows them to skip the line basically within their queue. So even though this is a linked list, basically, that's how these are implemented in the graph model, whenever the tasks are pulled and consumed they get reordered based on their priority, and that is how they get consumed by a compute service. So how this looks in the implementation here: like I said, we grab task queues based on weights, we choose a task queue to draw from, and then we claim a single task, and that's based on priorities. That all happens internally; that sorting happens server side or database side. Does that answer your question, John?
@03:10 - John Chodera (he/him/his) So the weighting at the task queue level is stochastic, and the priority at the task level within a queue is deterministic. Right. Okay.
@03:20 - David Dotson Does that.
@03:22 - John Chodera (he/him/his) It might be.
@03:24 - David Dotson Does that meet your needs, or did you have something else in mind?
@03:29 - John Chodera (he/him/his) I think it'll work. It's just that we might want to pick one strategy and stick with it for for future. But I think this will totally be fine.
@03:35 - David Dotson What we could do is... I mean, basically, it all depends on your usage. Right now, by default, if you just create task queues, then they all have the same weight, and so they get randomly selected. And the priority is set the same for all tasks, so it's first in, first out.
@03:52 - John Chodera (he/him/his) There might be advantage to having the priority be interpreted as a weight instead of a priority in the future, just because of the way that adaptive algorithms will work.
@04:01 - David Dotson Oh, wait, hold up for the individual tasks.
@04:03 - John Chodera (he/him/his) Yeah.
@04:04 - David Dotson Oh, I see. Okay. Because then that would be random selection based on weight as well. It's not a...
@04:12 - John Chodera (he/him/his) Exactly. I'm just thinking ahead: as you do analyses on these, you can assign a weight that has to do with the difficulty of a transformation. And you don't want to completely exclude transformations, you just want to down-weight how much effort you allocate to them.
@04:27 - David Dotson So your weight is based on... okay, got it, which wouldn't be encoded very well with just a strict priority.
@04:37 - John Chodera (he/him/his) It would be very different and it would be also very difficult to model. So that's just the thinking there in terms of that. I mean, I can see the utility of the priority, but I think we had been thinking of, or I've been thinking about weight based allocation of effort more than that, but totally fine for now.
@04:53 - David Dotson Okay, let me get that down. Thank you for that. It might also be possible to support both; I hesitate to say that, but tasks could have both weights and priorities: if all priorities are the same, no reordering happens, and then weights handle the selection.
@05:15 - John Chodera (he/him/his) Yeah, that would work great.
@05:17 - David Dotson Yeah, and then weights are handled second. So it's kind of like primary and secondary keys, yeah. Any other questions for me? I know I'm kind of throwing all of this at folks; I just want to give you a bit of a portrait of the current state. Okay, so I'm working on the API endpoints.
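A minimal sketch of the selection scheme described above, using hypothetical data structures rather than the actual fah-alchemy implementation: task queues are chosen stochastically by weight, and tasks within the chosen queue are ordered deterministically by priority (the sorting happens server/database side in the real system).

```python
import random
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    priority: int = 10          # deterministic ordering within a queue; lower runs first

@dataclass
class TaskQueue:
    network: str
    weight: float = 0.5         # stochastic selection across queues; higher is drawn more often
    tasks: list = field(default_factory=list)

def claim_task(queues):
    """Choose a queue by weight, then claim the highest-priority task from it."""
    candidates = [q for q in queues if q.tasks]
    if not candidates:
        return None
    queue = random.choices(candidates, weights=[q.weight for q in candidates])[0]
    queue.tasks.sort(key=lambda t: t.priority)   # the real system does this server/database side
    return queue.tasks.pop(0)
```

If all weights are equal and all priorities are equal, this reduces to random queue selection with first-in, first-out consumption, matching the default behavior described above.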
@05:47 - Iván Pulido Sorry, sorry, I was muted. I think this was a question last week: do we expect this to also work in HPC systems, or is it not expected to work on them?
@06:03 - David Dotson So the synchronous compute service would work fine on HPC. The idea is that you could submit one of these; this will be exposed via CLI as well, so I'm also adding an entry point to the fah-alchemy CLI. You could spin one of these up as a job on HPC and it would consume from the compute API. As long as that compute API is exposed to the internet, which we eventually want to do (we probably won't do that for our first deployment, but we do want to support it), then yeah, you could drop one of these onto an HPC queue. It would talk out to the server it's pointing to, pull in tasks, execute them locally, and then ship back results. So we are trying to design this to be horizontally scalable. That means you could spin up multiples; I know in this diagram you see only a single compute service, but you could create multiples and they would all point to this API. This API should service many of these things, and they could be run on HPC. Does that answer your question, Iván?
@07:21 - Iván Pulido Yeah, thank you.
@07:24 - David Dotson Yeah, it's a similar architecture to QCArchive, or QCFractal. Not that that necessarily means anything to you.
@07:30 - John Chodera (he/him/his) It probably also offers us a pressure valve for further testing, where, if we're just able to run a little bit on my laptop, we could do that for a little while until we get up to Folding@home scale.
@07:41 - David Dotson Exactly, yeah. So we'd be able to use, for Iván, for example, the protocol you're literally working on right now, the perses protocol; that protocol is designed to run locally, right? It's designed assuming you're going to run OpenMM on the host that's executing the protocol. So running one of these non-F@H services would be appropriate for that on an HPC system.
@08:12 - Iván Pulido Yeah, that's what I had in mind. Thank you.
@08:15 - David Dotson Cool. Yeah, so my first task is to get the synchronous version working, because that's the easier one to cover in the test suite. Then the async version, which makes use of an event loop and a process pool and does a bit more concurrency and parallelism. And we'll work our way up to the Folding@home version of that, which is functionally the same but also needs to talk to the Folding@home work server API, so it has an additional system it has to talk to. This is all the basis for me asking for a one-week extension. I want to keep the pressure high, and I'd like to remain in dev mode as much as possible. I'll continue to work with Iván, Mike, and others on this. David, I know I haven't tapped you too much, but now that I'm in compute mode, I think I will; I've been trying to keep you out of Neo4j land. So, yeah, I basically need a decision on this.
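For orientation, a rough sketch of the synchronous service loop described above. The `client` and `execute_dag` objects are hypothetical stand-ins for the fah-alchemy compute API client and DAG executor, not the real API; the async and F@H variants would swap in an event loop, a process pool, and the Folding@home work server calls.

```python
import time

def run_service(client, execute_dag, sleep_interval=30):
    """Claim tasks from the compute API, execute them locally, ship results back."""
    while True:
        task = client.claim_task()              # weighted/priority selection happens server side
        if task is None:
            time.sleep(sleep_interval)          # nothing claimed; poll again later
            continue
        transformation = client.get_transformation(task)
        dag = transformation.protocol.create(   # build executable work for this transformation
            stateA=transformation.stateA,
            stateB=transformation.stateB,
            mapping=transformation.mapping,
        )
        result = execute_dag(dag)               # a ProtocolDAGResult, shipped back to the server
        client.push_result(task, result)
```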
@09:22 - Jeffrey Wagner Yeah, I think if we make it a one-week extension just every week, the first item is extending it. I think it will over time lose its sense of urgency. So, I might say like two week and that way the frequency at which we revisited it is less likely to become a rubber stamp and we'll have to look at, you know, can we cut things and actually get stuff done? So, I would push for a two-week deadline.
@09:51 - David Dotson Okay, this is for the approvers: I'm requesting a two-week extension. Do I get it?
@10:04 - Jeffrey Wagner Thumbs up from Richard. Thumbs up from John. Thumbs up from me.
@10:09 - David Dotson This is your opportunity to drag me over the coals, so feel free.
@10:22 - John Chodera (he/him/his) David, do you have an example of the input files that you're feeding it for testing right now that we could use to make sure that we have a, like, a little bit of an actual workload ready for you?
@10:33 - David Dotson So what I'm using in the fah-alchemy test suite right now is the openfe-benchmark networks, while we're working toward protein-ligand-benchmark. The way these work, I can just give you an idea here. Is Jenke on the call?
@10:55 - John Chodera (he/him/his) I think he's at Diamond today, so he's not able to attend. But if you can just point us to the input, it's basically a file that takes the data in the benchmark format and translates it into the workload, and we can probably work on that over the next week or two.
@11:09 - David Dotson Yeah, one second one. Oh, I know. Sorry.
@11:18 - John Chodera (he/him/his) And this is good, though, because it means that you're already set to take the open force field benchmarking tasks up. And presumably the OpenFE as well.
@11:39 - Jeffrey Wagner I think that will require the completion of the protein-ligand benchmarks. Yes. Yeah.
@11:46 - David Dotson Yes.
@11:47 - Jeffrey Wagner And so it's kind of the test network that David's using may be different from that, right?
@11:51 - David Dotson Yeah, so this is TYK2. It's based on... Irfan, you can correct me if I'm wrong here, but this is from openfe-benchmark, which is basically a fork, a very small fork, of protein-ligand-benchmark that OpenFE is using as the basis for their tests.
@12:06 - John Chodera (he/him/his) Is it in the new format or the old format?
@12:09 - Irfan Alibay So this is the halfway point. When Melissa did them originally, she was using the GUI rather than the CLI, straight from Maestro.
@12:24 - John Chodera (he/him/his) But is the file format the same? Or is it different from what we will ultimately end up with?
@12:32 - Irfan Alibay This should be the same file format. The coordinates might be slightly different, but in terms of results, we know at least for TYK2 that we ran the benchmark and it was reasonably similar to the previous results from David Hahn. So we're reasonably sure that one works. Some systems, like P for T, have changed a lot; we're currently investigating whether that's on our end or somewhere else.
@13:03 - John Chodera (he/him/his) But it sounds like this, you know, the files will change, but the file format won't change. So we don't need a lot more coding to be able to address the new benchmark.
@13:14 - Iván Pulido I tried running that after Melissa shared the CLI results and how to run them. I tried running TYK2 and I got really different results compared to the previous results we had. I haven't looked in detail at why that is the case, but I think it might be because the edges changed. I have to look into that. I don't know if you've seen similar things, Irfan.
@13:47 - Irfan Alibay Oh yeah, I should mention that the openfe-benchmark edges are a minimal spanning graph. The network was rebuilt based on a minimal spanning graph for TYK2. I don't remember if there were any ligands that were dropped from the edges, though. We should revisit that, because I don't remember.
@14:07 - Iván Pulido There were like 20, and we ended up with 18, I think. So it's not too bad, but there were at least one or two.
@14:20 - Irfan Alibay Well, yeah, we should. Sorry.
@14:23 - John Chodera (he/him/his) Just to get back to the original topic so that we can close this up and then move on to the benchmark: the structure of the files has not changed since David's been building this using this example, right?
@14:38 - Irfan Alibay They should not have. They'll be the same PDB and SDF.
@14:43 - John Chodera (he/him/his) Right. So we should be able to feed more benchmarks through it in their final form once they're prepared. But we will have to structure our input files in a way that is suitable for this input. So if we could get the path to that file example, that would be useful for us.
@15:03 - David Dotson Yeah, to be clear, the intention is to move the test suite for fah-alchemy over to protein-ligand-benchmark once we have 0.3.0 out. So I do want to cut openfe-benchmark out; I know it was always intended as kind of a temporary solution anyway, I think, on the OpenFE side too, and I'm just using it because it's the most available thing. I did want to ask Irfan: I do want to add in the use of mappings, because I currently don't have any atom mappings in my test suite, but I need to make sure my test suite is operating on real-ish systems. What are you guys using currently for that?
@15:44 - Irfan Alibay So the mappings in openfe-benchmark right now are generated on the fly. It's something we quickly put together; it's not the best thing in the world. We have an API for essentially loading in the benchmarks and generating a mapping based on a network we had previously defined, so feel free to use that. What I promised I was going to do (and I told them to harass me tomorrow if I don't do it) is to drop mappings for everything in protein-ligand-benchmark as a YAML file with a dictionary. So hopefully we can just switch to that.
@16:27 - David Dotson Yeah, I think we have that as... I think it's issue number 69 on protein-ligand-benchmark.
@16:39 - John Chodera (he/him/his) So it'll go into the edges file.
@16:42 - Irfan Alibay So we'll get rid of the edges file. We will create a 03 folder, and there could be multiple edges files in it, each named by how we got the edges, if that makes sense. It might say, for example, openfe slash perses-mapper slash...
@17:05 - John Chodera (he/him/his) Got it.
@17:06 - Irfan Alibay Yeah.
@17:07 - John Chodera (he/him/his) So it'll provide the edges and the atom mappings in their current SDF indices. Yes.
@17:13 - Irfan Alibay Well, yes, hopefully. That's one thing I'm double-checking: I'm working off the assumption that RDKit retains the SDF indices, which is, I believe, something Richard has confirmed.
@17:25 - John Chodera (he/him/his) We know Open Force Field retains the atom indices from a loaded SDF file. I believe that also happens with OpenEye and RDKit as well. Yeah, OK.
@17:42 - Jeffrey Wagner Yeah, that's my understanding, as long as the hydrogens are explicit, for any of the three toolkits. I know Open Force Field will, and I do suspect that OpenEye and RDKit will as well.
@17:56 - Irfan Alibay Cool. I will double-check this just in case, but thanks for that; it makes things easier.
@18:03 - David Dotson Okay, Irfan, is there anything you need from this group for number 69?
@18:09 - Irfan Alibay No, I just need to get it done.
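Purely as an illustration of the kind of artifact discussed above (a per-mapper edges file carrying atom mappings keyed by SDF indices), here is a sketch in Python; the layout, keys, and ligand names are hypothetical, since the actual format is still being worked out under protein-ligand-benchmark #69.

```python
import yaml

# Hypothetical example of what a per-mapper edges/mappings file might contain.
example = """
mapper: perses-mapper          # label for how the edges/mappings were generated
edges:
  - ligand_a: lig_01
    ligand_b: lig_02
    atom_mapping:              # SDF atom indices: ligand_a index -> ligand_b index
      0: 0
      1: 1
      2: 3
"""

data = yaml.safe_load(example)
for edge in data["edges"]:
    print(edge["ligand_a"], "->", edge["ligand_b"], edge["atom_mapping"])
```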
@18:14 - David Dotson Yeah, I'll continue moving forward with openfe-benchmark for now in our test suite for fah-alchemy, but yeah, I'd like to switch. This also gets us the benchmark systems that we need for OpenFE as well. Okay, I think that's all for me. Any additional questions on current board status, where we're at, where we're going? Okay, thank you. On that note, we can walk directly into protein-ligand benchmarks. So, Irfan, do you want to lead discussion here on current blockers, what the current state is?
@19:00 - Irfan Alibay Yeah, so essentially the main thing here is I've not had a chance to really work on this last week. The current state of things is that I need to redo these edges. The other thing is there was this paper from Schrodinger, which I put the link in; we discussed it in the past, and it has quite a big supplementary information section. I don't know if this will... this might not come through for everyone, actually.
@19:35 - David Dotson I think it works.
@19:36 - Irfan Alibay Okay.
@19:38 - David Dotson Okay.
@19:39 - Jeffrey Wagner For me. Okay.
@19:43 - Irfan Alibay Yeah, and so essentially in this supplementary information they define all the systems and what kind of remedial actions they had to take for everything. So I'm currently creating a full list of that, so that we can check prior to release that we're not obviously missing something, you know, something they have found, where we really should be addressing these residues and we haven't done it. I don't know if we want to make that a marker, like something we definitely want to do for 0.3, or if we want to move it to 0.4, but I would think it would be best practice to at least check how far off we are compared to them.
@20:24 - David Dotson So hold on, can you clarify for me the question?
@20:30 - John Chodera (he/him/his) So as I understand it, there's a recent Schrodinger paper about revisiting the benchmark set that just came out in preprint form from Greg Ross et al., and we should make sure that the critical lessons from that paper are represented or reflected in the latest version as sort of a sanity check. Is that your suggestion for now, and then in the future we might be able to incorporate more insights from that paper into a revised release?
@20:59 - Irfan Alibay Yes, exactly. I think specifically the critical things that they might have outlined, for example any specific protonation state, so that if we deviate, at least we have a record of the fact that we have done so.
@21:17 - David Dotson Cool. So can you drop that link into Slack for me, just so I don't lose it?
@21:28 - Irfan Alibay And then what we can do... do we have an issue that points to this? I'm currently drafting one.
@21:34 - David Dotson Oh, thank you so much. Excellent. I know it's a lot of work; I appreciate you chasing that down for us. So we can make that the basis of... potentially we can pull that into the 0.4 milestone, or potentially 0.5 if we want to move faster on 0.4.
@21:55 - John Chodera (he/him/his) Yeah, I think we want to only incorporate critical fixes into this release; things that require a different strategy for handling something, which might also require us to update the best practices manuscript, we should leave for the next release.
@22:13 - David Dotson Okay. Yeah, Irfan, if you want to go ahead and slap the 0.4 milestone on the issue once you've created it, that way we can make sure it's in there. Okay, thanks.
@22:28 - Irfan Alibay Thank you for that.
@22:29 - Jeffrey Wagner Yeah, and I agree. I agree with David that this would be handy, but we should make sure that it doesn't get in the way of the 0.3 release.
@22:46 - David Dotson I know we have some open PRs, I think some from Melissa; are any of these blocked? I think this one's a work in progress, but it looks like maybe two might be close.
@23:01 - Irfan Alibay So I double-checked all the ligands, and I think it's just a case of... oh yes, the YAML files for the ligands need to be changed because there are new ligands here. So that's the to-do for that.
@23:16 - David Dotson Okay, is this in Melissa's court at the moment?
@23:22 - Irfan Alibay No, I think that's in my court, or anyone else's that wants to take it on.
@23:26 - David Dotson Okay, is this something you can do?
@23:31 - Irfan Alibay I can do it. Yeah, I'll give it a go. And then I think if by the end of the week there's no action on this, I will seek to hand it over to someone else.
@23:45 - David Dotson Okay, is that the only thing that's blocking merge on it?
@23:48 - Irfan Alibay I mean, I believe that's it.
@23:50 - David Dotson Yes. Okay. Okay. I'm curious now about the CI failures.
@24:03 - Jeffrey Wagner Sorry, what was the plan of action here? So something's causing CI to fail, Irfan, you're going to look at it, but if it's not done by the end of the week... Or wait, so there's a... sorry, I'm just trying to catch up in the notes. Could we just repeat that last part?
@24:26 - Irfan Alibay Yes, so the action here is that some YAML files are in need of updating. I will take that on, but if I don't manage to do it by the end of the week, I will seek to hand this over to someone else.
@24:45 - David Dotson Okay. I think I might be able to be a backstop for you here. So yeah, ping me if you need my attention on it instead.
@25:00 - Irfan Alibay Okay, thanks.
@25:03 - Jeffrey Wagner Thank you.
@25:06 - David Dotson Anything else on the protein-ligand-benchmark front? I will add: one PR that's open on my plate is one I'm currently holding off on; I won't merge it until after... I think it's number 82. Irfan, quick question: I think we wanted to get number 68 resolved, and probably number 81, before we do any sort of restructure. Is that the case, sir?
@25:50 - Irfan Alibay So 81 should be fine with 82. 68 will need to be done once 82 gets merged.
@25:59 - David Dotson Okay. Do you have any time to put any attention on number 68?
@26:05 - Irfan Alibay Yeah, that should be fine. That's not too much work.
@26:09 - David Dotson Okay. Thank you for that. Yeah, I know a lot of this is on your plate. I appreciate your efforts on this. Okay. We'll move on.
@26:34 - Iván Pulido Yeah. Well, we met last week, and I raised the question about the mappings, you remember that: what happens if the engine, in this case perses for example, needs to change the mappings, because there might be some bonds that change from constrained to unconstrained because of the changes. I think we agreed for now that we make perses do this and log it somehow, but I still think there's an open question on how to deal with this correctly from the gufe side. Other than that, I'm working on how to handle this correctly, writing the units properly, because I misunderstood what the gather unit meant and I wrote things the wrong way. But I now have the proper idea.
@27:44 - David Dotson OK, did you still want time with me today?
@27:51 - Iván Pulido I think if there's room for talking about that on Friday's meetings, that would be best for me. OK.
@27:59 - David Dotson Yep, yeah, we should. Richard, is there room on the OpenFE call on Friday?
@28:07 - Richard Gowers I think we've got something else planned for Friday already.
@28:13 - Iván Pulido Okay then I'll because I won't have any updates today so I'll try scheduling the meeting later in the week.
@28:21 - David Dotson I can also meet with you either tomorrow morning or Thursday morning. Okay or even in afternoon. It's up to you.
@28:31 - John Chodera (he/him/his) There was one more issue that came up when Iván and I were talking about how we consume the provided or generated atom mappings when feeding them to the protocols. The current approach just provides topology information as input to the thing that generates the atom mapping, is that right? The current API. But no information about the system itself, the parameters. Right?
@29:01 - Richard Gowers Yes, it's just the small molecules and a conformation.
@29:04 - John Chodera (he/him/his) Yeah. Or the protein, presumably, in the future for protein mutations. But the difficulty here is that we have a bug we're addressing right now in perses where a carbon-hydrogen bond, which is constrained in length, maps to a carbon-oxygen or carbon-carbon bond.
@29:27 - David Dotson We're getting some background noise. Sorry.
@29:35 - John Chodera (he/him/his) Yeah. So I'm just wondering what the current strategy is for how codes would deal with either mapping of constrained bonds of different length, which shouldn't be mapped (they have to be de-mapped), or mapping of a constrained bond to a non-constrained bond. This is something where we have a separate stage that de-maps these even if they were said to be mapped, and then the calculation just proceeds without that mapping. But then the information about the mapping is incorrect. We couldn't figure out another way to do it. So how do you folks want to deal with that at the API level?
@30:27 - Richard Gowers Yeah, that's a tricky one. I think on Friday we talked about the protocols actually changing the mapping and then logging that they've changed it, making a best effort to follow the mapping. We weren't completely happy with that, because it's a bit like the protocol's lying to you: you ask it to do one thing and it does another, though it's close to what you wanted. The other thing I think you're hinting at is that maybe just passing in a subsection of the system isn't quite enough, that maybe we have to think of atom mappings as consuming the entire system, so maybe both the protein and the small molecule, to see the whole picture when doing the mapping. That might be necessary, I guess, which would then obviously break the current API, you're right. Maybe that's necessary.
@31:16 - John Chodera (he/him/his) So I just wanted to get it on your radar, because it's something where we'll have to internally de-map after the parameters are assigned at that point. But if there's a better way to do it, it might be good to change the API sooner rather than later.
@31:30 - Richard Gowers Yeah, it's tricky. It's also going to require visibility of the force field too, right?
@31:35 - John Chodera (he/him/his) And so that's the critical information, right? Like the protocol-level settings about whether you are constraining bonds to hydrogen, and what the actual assigned equilibrium bond lengths are. Those are the two pieces of information that are needed. I believe both of those are in the same protocol-level settings that specify the force field itself, right?
@32:01 - Richard Gowers Yeah, because they affect the potential that you compute, so they should be in the same force field protocol-level settings. We hadn't thought of including force fields yet when we were thinking about mappings, but it sounds like it might be necessary, or we might have to give the protocol a bit of authority to change its input a bit, which seems like a fine solution.
@32:22 - David Swenson So we just want to, yeah, I guess make sure we address it now. I'll just reiterate the point that I made in the other meeting we had on this: if a protocol changes its mapping, I think we just need a standard in the API that the actually-used mapping is reported. If you want to do data mining later, you want a single location, no matter what protocol it is, to find the real truth.
@33:01 - Iván Pulido I think you're muted.
@33:05 - David Dotson Sorry, it was hard for me to unmute. So, John, a bit of a clarification question. Our current structure for all of this is that a transformation has a chemical system on either side, has a protocol hanging off it, and has a mapping as well. And then these get fed into the protocol's create method to generate a DAG of executable work. Are you suggesting that by the time we're at this position, we really don't know what this mapping is, because it has to be determined later, because it needs force field information and other pieces of information that aren't represented here?
@33:51 - John Chodera (he/him/his) Yeah, we can either take it as a suggestion that might be changed later, or we could take it as ground truth, but then we need to modify the upstream API that generates it to include more information. My guess is that it will be more efficient if we just allow it to be changed later, as long as we do what David Swenson suggests, which is to capture exactly what was computed in the output.
@34:17 - David Dotson Yeah, so it might be permissible to say that the mapping that you're fed is the initial guess or initial suggestion. Is that generally something that makes sense? Or, from your experience, is having that initial guess... I think so.
@34:38 - John Chodera (he/him/his) I think it's meaningful. It also allows us to have more flexibility in terms of pairing atom mappers with codes. Codes will not be able to execute every atom mapping, so you're either just going to get a lot of "can't do this, sorry, I'm giving up"...
@34:53 - David Dotson Or you'll ask the protocols to do their best effort at something that's related to this. So it's a bit of a question about the strategy of API design. I see, because one option is to do basically this, where protocols don't handle mappings internally at all, if this is generally a problem where it's impossible to say a priori, at network creation, here's what your mappings should be.
@35:25 - John Chodera (he/him/his) There are significant downsides to that too, because one of the things the OpenFE folks would like to do, presumably, is to explore different strategies for atom mappings that may have different errors, different utilities. Similarly, we would like to be able to exploit better atom mappings by starting with maybe three different mappings between compounds and then culling the ones that don't provide the best reduction in variance with time. So I think it would be a significant drawback to just get rid of the atom mapping entirely.
@35:55 - David Dotson Okay, so having it as an initial input, even if it's just used as a suggestion or an initial guess for whatever your protocol does with it, that is already valuable. It's still valuable to keep the API kind of where it is.
@36:08 - John Chodera (he/him/his) As long as we document our decision here about what it is the atom mapping means: it's a strong suggestion, but codes are allowed to deviate from it. But then we also need to capture as part of the output what atom mapping was actually used.
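To make the convention being agreed here concrete, a minimal sketch with a hypothetical helper (not an actual gufe or perses API): the protocol treats the supplied mapping as a strong suggestion, drops mapped pairs whose constrained bond lengths disagree, and keeps the mapping it actually used so it can be recorded in the results.

```python
def resolve_mapping(suggested_mapping, systemA, systemB, constrained_bond_length):
    """Return (used_mapping, dropped_pairs) from a suggested atom mapping.

    `constrained_bond_length(system, atom_index)` is a hypothetical helper that
    returns the constrained length of the atom's bond, or None if it is
    unconstrained; a mismatch means the pair cannot stay mapped.
    """
    used, dropped = {}, []
    for i, j in suggested_mapping.items():
        length_a = constrained_bond_length(systemA, i)
        length_b = constrained_bond_length(systemB, j)
        if length_a != length_b:   # constrained-to-unconstrained, or differing constrained lengths
            dropped.append((i, j))
        else:
            used[i] = j
    return used, dropped
```

The `used` mapping would then be carried into the output (e.g. the ProtocolDAGResult) so there is a single place to find what was actually computed, per David Swenson's point above.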
@36:23 - David Dotson Okay, yeah, I agree with this suggestion. So we don't have it documented yet, but I think OpenFE... you guys are on board with that approach too, right?
@36:47 - John Chodera (he/him/his) I'd like them to drive the choice here of what ends up happening, because they've been responsible for all the other design. I'm just providing some feedback about limitations.
@36:55 - Iván Pulido So one of the limitations that we discussed, or that was raised, last Friday was that the gufe objects are immutable. So if we want to report the actual mapping that we used, I mean, we can do it in many ways, but does it make sense that we can change this attribute of these gufe objects in place, or not?
@37:23 - David Swenson Maybe you don't change it; you just create a new one. That's easy, because all these things are just pointers: it's just a pointer to two chemical systems and then a dictionary, and you need to create the new dictionary for the mapping anyway.
@37:37 - Iván Pulido OK, OK, yeah, makes sense.
@37:40 - David Dotson Yeah, and that's a little bit of a concern, because you don't want to... I think, as John suggested, one atom mapping between two chemical systems might be appropriate for one protocol but not appropriate for a different protocol. And the model is that you can connect any two chemical systems with as many edges as you want, and those might be using different protocols, right? So you wouldn't really want one protocol mutating what's there, because then any of the others could also mutate it, and it becomes a mess. Well, hold on, I should say, never mind: the mutation could work, but it breaks the tokenization model and everything else, so it breaks a lot of things if we start allowing mutability on these objects. The better approach is what was just suggested.
@38:32 - David Swenson Just as a side note to both Dotson and Richard, because I mentioned this to you guys: we have a thing in OPS that's a copy-with-replacement, which I'll just put on the gufe tokenizable; that's how you mutate an object, essentially. Because we have this to_dict/from_dict cycle, all you've got to do is replace one of the things in the from_dict and you have a copy with a replacement. It's really a one-liner, it's really easy, but it's something I'm going to add to the gufe tokenizable.
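For reference, a minimal sketch of the copy-with-replacement idea described here, assuming only that objects round-trip through to_dict()/from_dict() as gufe tokenizables do; the helper name and the keyword in the usage comment are illustrative, not the actual gufe API.

```python
def copy_with_replacements(obj, **replacements):
    """Return a new object identical to `obj` except for the replaced fields."""
    dct = obj.to_dict()
    for key, value in replacements.items():
        if key not in dct:
            raise KeyError(f"{key!r} is not a field of {type(obj).__name__}")
        dct[key] = value
    return type(obj).from_dict(dct)

# e.g. building a mapping object that carries the atom map a protocol actually used
# (the keyword name here is illustrative):
# used_mapping = copy_with_replacements(suggested_mapping, atom_mapping=actual_map)
```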
@39:00 - John Chodera (he/him/his) Sounds super convenient. Yeah.
@39:03 - Iván Pulido But then we need the results part to deal with this, yeah, with this new result: that there is a new gufe object that has the actual mapping that was used, right?
@39:18 - John Chodera (he/him/his) Does the result contain all the information that's passed to create? Because if so, then you just replace the atom mapping with the new version you actually used, right?
@39:31 - David Swenson We were discussing this on Friday, and it did not at that point have a pointer back to what would really be the transformation, which is the combination of protocol, mapping, and the systems. But I think that's something that can be added, as sort of the effective transformation that was actually used.
@39:50 - John Chodera (he/him/his) Yeah, it seems like it would be useful to have it contain the transformation information together to satisfy your request, David, that we keep all that information as the... Ground truth.
@40:08 - David Dotson Yeah, I envision the modified mapping, or the mapping that was actually used by the protocol, going into the ProtocolDAGResult that comes out the other end, so it should be in there. I think at some point you suggested that we should start to give that object a bit more structure; I don't know if that's still your opinion.
@40:37 - David Swenson In a couple of ways. I would also point out it would be really nice there to have the entire transformation, because that also gives protocols some flexibility if you want to change the molecules or change coordinates. If you're jiggling things around like that, you have the flexibility to do that, and it saves what was actually the real starting point.
@41:02 - David Dotson Okay. Yeah, I think for now, Iván, when we meet I'll work with you on where we put this modified mapping, just to make sure we preserve it as an artifact of execution, and then we will probably iterate on how we store that generally for protocols, because I think it's going to be a fairly general thing, or at least we'll have best practices in the absence of an API.
@41:32 - Iván Pulido Okay, that works.
@41:34 - David Dotson Yep. Anything else you wanted to hit on.
@41:40 - Iván Pulido No, I'm good.
@41:44 - David Dotson Please let me know what your availability is for Wednesday or Thursday. I'd like to meet with you then, because I also want to make sure the perses protocol is something that can translate pretty easily into the F@H version of it, because I'll ideally need to do only the lightest touch on it to make it an F@H-interfacing protocol. Okay. Last item, then: Mike, do you want to talk about protocol settings? I know you've got a PR that I think just got a review from Richard in the last couple of days.
@42:20 - Mike Henry (he/him) Yeah. It's got a couple of reviews on it now. And I think there's just like one comment thread that's left to address. So, um, this should be merged pretty soon.
@42:31 - David Dotson Excellent. Yeah. I think great work on this. Can we, yeah, if we can get it merged, then we can, we can iterate further on it.
@42:39 - Mike Henry (he/him) So I, yeah, exactly. And that's, I think if you refresh that page, you'll see my little comment, which was basically that like, uh, proposal of why don't we just do this thing now and take an iteration on it. Um, so once that gets some feedback from people in the, the GitHub thread, then we'll take that action.
@43:02 - David Dotson Okay, cool. Thank you for all this detail. Do is there anything you want from this group?
@43:08 - Mike Henry (he/him) Nope, I'm good. I'm good. I just want to get this iterated. I think at this point we've done enough designing, and I just want to see how it gets used so we can make changes to it, because I'm sure this will not be the final version.
@43:26 - David Dotson Perfect. Richard, I think you've been playing with it. Do you have any feedback for Mike here?
@43:33 - Richard Gowers No, I've been using it in production, or as close as we get to production. Now that we've released (like an hour ago, we released what we currently have) we can open it up to in-production use, sort of give it a whirl. So I think Mike's right that we can try it and see where it breaks.
@43:50 - David Dotson Excellent. And then, Iván, on the same question: once we get this merged, we'll want to start plugging the settings into the various points you've marked in your protocol in perses?
@44:05 - Iván Pulido Yep. Yeah. The ones that I need are marked. I need to check whether they actually fit what we have in the object, but I have a feeling they will.
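For context, a minimal, illustrative sketch of what a pydantic-style settings taxonomy along these lines could look like; the class names, fields, and defaults below are assumptions for illustration only, not the actual design in gufe #37.

```python
from pydantic import BaseModel

class ThermoSettings(BaseModel):
    temperature_kelvin: float = 298.15
    pressure_bar: float = 1.0

class ForceFieldSettings(BaseModel):
    small_molecule_forcefield: str = "openff-2.0.0"
    constrain_hydrogen_bonds: bool = True      # relevant to the constrained-bond mapping discussion above

class ProtocolSettings(BaseModel):
    thermo: ThermoSettings = ThermoSettings()
    forcefield: ForceFieldSettings = ForceFieldSettings()

# A protocol would then consume a single settings object, e.g.:
# settings = ProtocolSettings(thermo=ThermoSettings(temperature_kelvin=300.0))
```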
@44:17 - David Dotson OK. Any additional questions for Mike? OK, if not, thank you. That's it for the agenda. Are there any other topics or questions folks when we address? We've got about 10 minutes left.
@44:38 - John Chodera (he/him/his) So I just wanted to point out to Jenke: the file that you reformat the benchmark into, the input for a DAG, you and I should look at this at the Thursday meeting or beyond, about how we're going to set up the first sets of calculations for ASAP using this strategy. Because we'll have to figure out how to set up our pipelines to do this.
@45:06 - David Dotson So John, maybe that's best done in a working session.
@45:08 - John Chodera (he/him/his) You mean, yeah, perfect. Yeah, and we could do it next week when he's back because we have an extra week to do this.
@45:15 - Iván Pulido I think you guys are muted. Wait, wait.
@45:22 - David Dotson Oh, Jenke, good. Were you going to say something?
@45:25 - Jenke Scheen I was trying to say something, sorry. I have a very alternative setup.
@45:28 - David Dotson So, is everything okay for next week?
@45:32 - John Chodera (he/him/his) Next week would be fine because we have two weeks.
@45:35 - Jenke Scheen OK, yeah. I'll OK. Let's discuss on this Thursday and then we'll see. Great.
@45:43 - David Dotson Thanks. OK. Any other questions or comments? OK, thanks, everyone. Really appreciate all the feedback and input on this. We'll give you back your 10 minutes. and we'll see you around.
@46:03 - John Chodera (he/him/his) Thanks.
@46:04 - Iván Pulido Thank you.
@46:05 - Irfan Alibay Thanks.
@46:06 - Jeffrey Wagner Thank you very much. See you.
@46:17 - David Dotson See you. Thank you. |