
Participants

Goals

  • alchemiscale.org

    • user questions, issues, requests

    • compute resources status

    • current stack versions:

      • python 3.10

      • alchemiscale: 0.4.0

      • neo4j: 5.18

      • gufe: 0.9.5

      • openfe: 0.14.0

      • perses: protocol-neqcyc

      • openmmforcefields: 0.12.0

  • DD: alchemiscale release 0.5.0 imminent; milestone complete

    • includes:

      • openfe + gufe 1.0 compatibility

      • Folding@Home compute support

      • feflow inclusion, drop of perses

    • will be deployed on a new host with a new database as api.alchemiscale.org, with advance notice to users

      • current api.alchemiscale.org instance will be moved to api.legacy.alchemiscale.org, kept around for some time, but with no new compute provisioned

  • DD: will be working on testing MIG splitting on Lilac A100s after alchemiscale 0.5.0 is deployed to alchemiscale.org

  • IA: feflow and settings changes that are backwards-incompatible

Discussion topics

Notes

  • alchemiscale.org

    • user questions, issues, requests

      • JS: Are we going to get non-equilibrium cycling on alchemiscale soon?

      • DD: Yes, working on resolving the feflow issues so we can cut a release; the concern is that releasing too soon could cause issues

      • JS: ETA?

      • DD: Should be a matter of days; better to make releases

      • JS: This PR adds NEQ support in asapdiscovery

    • compute resources status

    • current stack versions:

      • python 3.10

      • alchemiscale: 0.4.0

      • neo4j: 5.18

      • gufe: 0.9.5

      • openfe: 0.14.0

        • DD: When we deploy 0.5.0, we will bump gufe and openfe to latest 1.x

      • perses: protocol-neqcyc

        • DD: This will be dropped in 0.5.0

      • openmmforcefields: 0.12.0

        • MH: working to re-add GAFF support to openmmforcefields; work happening here:

        • JW: can we make a new release of openmmforcefields (0.14.0) that only works with the latest OpenMM, then release another (0.15.0) with MH’s patch that works with all OpenMM versions?

        • MH : yes, I think that approach makes sense

        • MT: Will take responsibility for 0.14.0 release

          • JW: will want to pin to OpenMM 8.1.2 or newer in the conda-forge feedstock
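A hedged sketch of what that run pin could look like in the openmmforcefields conda-forge feedstock's meta.yaml (section layout follows conda-build conventions; this is an illustrative fragment, not the actual feedstock recipe):

```yaml
# Hypothetical feedstock fragment illustrating the pin discussed above.
requirements:
  run:
    - python
    - openmm >=8.1.2
```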

  • DD: alchemiscale release 0.5.0 imminent; milestone complete

    • includes:

      • openfe + gufe 1.0 compatibility

      • Folding@Home compute support

      • feflow inclusion, drop of perses

        • DD: What python version do people want?

        • MH: Grab the highest version of Python that resolves the environment given our other pins

    • will be deployed on a new host with a new database as api.alchemiscale.org, with advance notice to users

      • current api.alchemiscale.org instance will be moved to api.legacy.alchemiscale.org, kept around for some time, but with no new compute provisioned

  • DD: will be working on testing MIG splitting on Lilac A100s after alchemiscale 0.5.0 is deployed to alchemiscale.org

  • IA : feflow and settings changes that are backwards-incompatible

    • DD: We are meeting tomorrow to talk about these changes

  • MT : all packages will soon be on pydantic v2

    • MT: Example of how v1 & v2 APIs CANNOT mix: https://github.com/openforcefield/openff-bespokefit/discussions/328#discussioncomment-8855550

    • DD: have a PR working on it; tests pass locally, but on CI it just hangs and times out after 6 hours

    • JW: The PR is nice since it migrates to native v2, but you can install pydantic 2 and use the v1 API, which works fine as long as you don’t mix them

    • MT: The PR to use the v1 API when installing the pydantic 2 package is an easy one, mostly copy-and-pasting different imports
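The point above about mixing APIs can be sketched as follows — a minimal illustration assuming pydantic >= 2 is installed; the model names are hypothetical, and see the linked bespokefit discussion for the real-world failure:

```python
# With pydantic >= 2 installed, the legacy v1 API remains importable from
# `pydantic.v1`, which is why the "easy PR" above is mostly swapping imports.
from pydantic import BaseModel                     # v2 API
from pydantic.v1 import BaseModel as V1BaseModel   # v1 compat API

class LigandV1(V1BaseModel):   # hypothetical model, v1 API
    name: str

class LigandV2(BaseModel):     # hypothetical model, v2 API
    name: str

# Each API works fine on its own:
assert LigandV1(name="benzene").dict() == {"name": "benzene"}
assert LigandV2(name="benzene").model_dump() == {"name": "benzene"}

# But the two APIs cannot mix: a v2 model cannot build a schema for a
# v1 model used as a field type, and fails at class-definition time.
try:
    class Mixed(BaseModel):
        ligand: LigandV1
except Exception as exc:
    print(f"mixing v1 and v2 fails: {type(exc).__name__}")
```

Keeping each model tree entirely on one API (all `pydantic.v1` imports, or all native v2) avoids this class of error during the migration.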


Action items

  •  

Decisions
