Iteration Retrospective 2025-04-21

  • JW – Should we track Matt’s work on the joint demo? I think we should, since it’s work that OpenFF is contributing to other projects, and for future collaborations we should be tracking this kind of work.

    • JE – Do any of these tickets require coordination with OpenFE?

    • MT – Not yet; I think they’re in good shape interop-wise. We need to invest more energy into OpenFold and OpenADMET early on.

    • JE – Will these eventually go into a new repo and get pulled into zenhub?

    • MT – Yes

  • MT – When I see a PR into anything-goes in any of the columns that procedurally involve reviews, I get confused since the point of that repo is to skip the review process. It would help to have a brief comment somewhere about what’s needed to move those PRs forward (since just merging without looking at the files is within the practices of that repo).

    • JW – This came up because we are trying to use anything-goes as a low-effort place to drop the BeSMARTS work JM has done. This should probably get a review from the team, so it’s a different use case than what we normally expect in anything-goes.

      • Option 1: We review these, and use the zenhub Review column like we would for other repos

      • Option 2: We merge without review, then open zenhub tickets reminding us to “review” the code.

    • MT – I feel like the current PRs into anything-goes aren’t quite in line with the intent; they’re somewhat large-ish projects.

      • Option 3: Put these notebooks somewhere else.

    • JW – Let’s go ahead with option 2

    • JE – I guess one extreme case is someone tracking all their work in anything-goes, which would be a bit much, but in this case it seems appropriate.

    • JE – MT, does this resolve concerns?

      • MT – Not quite. JW currently has a mega-PR open in anything-goes that is a major value stream into the organization.

    • JE – This is a different piece of work?

    • (this is about the PTM protein loading flowchart)

    • JW – Sorry about that, I forgot that PR was still open. I do not intend to merge it. Closing it now.

  • JM – JW had sent a Slack message: “Matt and I talked about our PR review process this morning, and we realized that two different people could give substantially different reviews for the same ticket. In the spirit of making review assignment more interchangeable, maybe we want to think about how we could make our PR reviews more standardized, and better set context and expectations.”

    • JM – I had left a review that was probably way too deep and the insight was buried. I would benefit from clearer expectations on reviews.

    • MT – I hope I didn’t give the impression that you (JM) are leaving superfluous review comments.

    • JW – When I wrote that, I was thinking of my own review comments on one of MT’s PRs.

    • MT – One example is style, which will naturally vary by reviewer. But more importantly, reviews of PRs much longer than a few lines will require a lot of context/time. Overall it’d be good to agree on some standards for PR reviews and identify what we want to value. Procedurally, the value-add is that we don’t run into situations where a reviewer is pulled away from other work for unnecessary amounts of time.

    • JE – Glad we’re having this discussion.

    • JW – The low-effort option would be to jot down a few bullet points of what we think a good review should be, and iterate on them as we find defects.

      • Another idea (higher effort) is to learn by doing: we each review the same PR and see what we identify differently. We could use this same exercise for onboarding new team members.

    • JM – That could be an interesting exercise.

    • LW – One thing I like about the form of JOSS reviews is that they put a checklist in the PR template, so once we know what we want, that could be a good way to record the points/format.

    • MT – I was thinking something close to the low-effort option. The high-effort route would be interesting, but I’m not sure how it’d feed back into our processes. So overall I’m in favor of the first option with planned iterations. I’d want to focus as much on what we DON’T include as what we DO include. So we start by doing it with low expectations, but plan to improve and systematize it over many iterations.

    • JE – And it seems like iterating on a solution to the first option will open the way toward a more structured exercise if we go that route. Who should be involved?

      • JW + MT – At least us, maybe JM if time permits.

  • LW – Last week, I ran into an annoying bug with QCSubmit/QCPortal, and dumped it into the Zenhub channel, and MT and JW had it fixed when I woke up.

    • JW – This was actually a great experience to debug with MT

    • MT – Agree, this was a neat tour of a part of the stack I don’t see much.

  • SFE planning

    • Sage 2.0 = 350 systems in MNSol, 70 in FreeSolv

    • Done in triplicate. Each takes 6.5 GPU-hours.

    • At 50 GPUs, this is ~1 week per force field (still using the old image); see the back-of-the-envelope check after this list.

    • Submit proposal to NRP to get more GPUs?

      • JM will do this
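
    • Back-of-the-envelope check of the ~1 week estimate (a minimal sketch; the system counts, replicate count, and per-calculation cost are taken from the bullets above, and the short Python script is purely illustrative):

```python
# Rough wall-clock estimate for one force field, using the figures quoted above.
mnsol_systems = 350      # systems in MNSol
freesolv_systems = 70    # systems in FreeSolv
replicates = 3           # done in triplicate
gpu_hours_each = 6.5     # GPU-hours per calculation
gpus = 50                # GPUs assumed available

total_gpu_hours = (mnsol_systems + freesolv_systems) * replicates * gpu_hours_each
wall_clock_days = total_gpu_hours / gpus / 24

print(f"{total_gpu_hours:.0f} GPU-hours -> ~{wall_clock_days:.1f} days on {gpus} GPUs")
# 8190 GPU-hours -> ~6.8 days on 50 GPUs, i.e. roughly one week per force field
```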