...

Notes

  • user questions/issues/feature requests

    • MO : is there a way to get values from individual tasks

    • Code Block
      # one estimate per ProtocolDAGResult of the Transformation
      # (asc: AlchemiscaleClient; tf_sk: the Transformation's ScopedKey)
      estimates = []
      tf = asc.get_transformation(tf_sk)
      for pdr in asc.get_transformation_results(tf_sk, return_protocoldagresults=True):
          pr = tf.gather([pdr])
          estimates.append(pr.get_estimate())
      Code Block
      # alternatively, one estimate per Task on the Transformation
      estimates = []
      tf = asc.get_transformation(tf_sk)

      for task_sk in asc.get_transformation_tasks(tf_sk):
          pdrs = asc.get_task_results(task_sk)
          pr = tf.gather(pdrs)
          estimates.append(pr.get_estimate())
      • IA : in 0.14.0 there’s a way to get the individual estimates

      • JS : very keen on openfe 0.14.0

        • IA : might want to do 0.14.1

        • JC : what are we using for serialization?

          • MH : use pydantic data models, but also use custom hooks for objects that don’t have a native representation (a sketch of this general pattern is included at the end of these notes)

          • JC : is there a binary representation for these?

          • MH : currently in the process of reworking some of the storage bits

  • compute expansion

    • JC : may be useful to have an OMSF-wide allocation; since OMSF now holds a large NSF grant, it can probably easily get one of these

      • JC : also haven’t reached out for a while to AWS or Google Cloud, but could probably ask for $40k at a time

        • would need to be ready to go; there is usually a time limit of about 6-12 months

        • may make sense to try getting things working with current resources

          • could make use of Moonshot Science paper publicity to rally support

        • DD : need one of the following to make good use of Spot for GPU compute:

          • extends support in protocols, allowing for chaining of short runs

          • partial execution of ProtocolDAGs in alchemiscale; possible but harder and will take more time to achieve

      • DD : having extends support in NonEquilibriumCyclingProtocol is something I’m planning to put effort into soon; needed by ASAP and for Folding@Home use (see the chained-Task sketch at the end of these notes)

        • JC : is an extends-able RelativeHybridTopologyProtocol something Ivan and I can work on?

          • IA : moving RelativeHybridTopologyProtocol out of openfe and into feflow in the next couple of weeks, so it will be ready for refactor work after that
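
Sketch of the serialization pattern MH mentions above (pydantic data models plus custom hooks for objects without a native representation). This is a minimal illustration of the general pattern, not gufe’s actual classes; ExampleSettings and the numpy-array hook are made up for the example:

Code Block
  # minimal, pydantic-v1-style illustration: a custom JSON hook for a type
  # (numpy.ndarray) that has no native JSON representation
  import numpy as np
  from pydantic import BaseModel

  class ExampleSettings(BaseModel):  # hypothetical model, not a gufe class
      name: str
      weights: np.ndarray

      class Config:
          arbitrary_types_allowed = True  # accept the numpy field as-is
          json_encoders = {np.ndarray: lambda arr: arr.tolist()}  # custom hook

  s = ExampleSettings(name="demo", weights=np.array([1.0, 2.0]))
  print(s.json())  # the array serializes through the custom hook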

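Rough client-side sketch of the “chaining of short runs” idea above, assuming a protocol with extends support. create_tasks and action_tasks are existing AlchemiscaleClient calls; the segment count and the tf_sk / an_sk ScopedKeys are placeholders:

Code Block
  # chain short Task runs via `extends` so each segment fits inside a Spot
  # instance's interruption window; asc is an AlchemiscaleClient, tf_sk and
  # an_sk are ScopedKeys for a Transformation and its AlchemicalNetwork
  n_segments = 5  # placeholder number of chained short runs

  tasks = asc.create_tasks(tf_sk, count=1)  # first short Task
  previous = tasks[0]

  # each new Task extends the previous one, continuing from its results
  for _ in range(n_segments - 1):
      extended = asc.create_tasks(tf_sk, extends=previous, count=1)[0]
      tasks.append(extended)
      previous = extended

  # action all Tasks on the network so compute services can claim them
  asc.action_tasks(tasks, an_sk)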

protein-ligand-benchmark:

  • JC : [presentation slides] on changing the strategy for protein-ligand-benchmark

    • https://docs.google.com/presentation/d/1yvHggJdbrTVlgYuZXlLusm3FhLuXHPLfedgo50OWM0k/edit#slide=id.p

    • JS : are you sure about wanting to use SpruceTK?

      • JC : yes, it gives a lot of control over the behavior; for the short term it is the way to go

        • long term interested in replacing things with some diffusion model, but have to start somewhere

        • would like to move on from Schrodinger tools here

      • JS : are you planning to expand protonation states for all of these targets?

        • JC : if ligands have multiple favorable protonation states, then include multiple states

          • won’t include these in core dataset

    • IA : are you targeting protein-ligand-benchmark 0.3.0?

      • JC : would be next major iteration, whatever we call that

    • IA : various groups want to get involved in PLB, including OpenEye

      • OE interested in membrane proteins; would this work for those?

      • JC : a lot of the systems from BindingDB are membrane protein systems

        • don’t have to bias away from membranes

      • JC : ideally would like to focus on where in the process we observe issues in inputs

        • can then flag these upstream so they can be fixed

  • IA : one of the issues we have is many folks have created their own forks of protein-ligand-benchmark

    • so there is currently a fragmentation issue

    • JC : there is no way to really go backwards, and that’s okay

      • would be possible to pull previous versions, but no fixes backported; we move forward only

      • would also be positioned to run with Folding@Home in an automated fashion, could hook up an automated dashboard for this

    • DD : benchmarking which protocols, which FFs, etc?

      • JC : would use current best practices to first identify well-behaved targets

        • those well-behaved targets would function as a snapshot/version of the benchmark set

        • actors like OpenFF could then use individual benchmark set versions as the basis for their own benchmarks to resolve questions on FF improvement

        • others could use these set versions for their own method improvements

Action items

  • David Dotson will update the alchemiscale.org stack to use openfe 0.14.0 and gufe 0.9.5
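
    • a quick post-deploy sanity check could look like the following (a sketch; assumes the installed distributions are named openfe and gufe)

    • Code Block
      # confirm the redeployed environment provides the pinned versions
      from importlib.metadata import version

      assert version("openfe") == "0.14.0"
      assert version("gufe") == "0.9.5"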

...