
🗓Date

👥Participants

🥅Goals

  • Updates from MolSSI

  • New submissions

  • Throughput status

    • OpenFF dipeptides torsiondrives v1.1: 4/5 TDs complete, 17 optimizations since last week

    • OpenFF dipeptides torsiondrives v2.0: 1/26 TDs complete, 50,852 optimizations since last week

    • SPICE PubChem Set 1: 43% to 57% complete (17,165 calculations)

    • SPICE Dipeptides single points: 58% to 60% complete (801 calculations)

  • User questions/issues

  • Science support needs

  • Infrastructure needs/advances

🗣Discussion topics

General updates

  • PB – BP is leading a workshop today and can’t attend

  • DD – Storage use is growing by over 30 GB/day, which will exhaust QCA’s disk space in about two weeks, so we’ve shut off the SPICE calculations. BP also saw some other bulky calculations coming in, and it looks like they came from PR 259 (SPICE dipeptides dataset), which I hadn’t seen before.

    • DD – I communicated this bottleneck to PE and JC, and they asked if we could still continue on DES370K.

  • DD – We’re still prioritizing OFF sets, but this is just a question of whether we can give PE and JC some compute on DES370K.

  • DD – No managers running on PRP; still prioritizing OpenFF jobs on other resources

  • DD & PB – We should shut off SPICE PubChem set 1

  • DD – SPICE dipeptide optimizations do not store wavefunctions, so that dataset should not impact storage much (see the sketch after this list)


  • CC – The last dipeptide torsiondrive in the 1.1 set is hanging, I think because we prioritized the dipeptide torsiondrive 2.0 set

    • PB – I’ve updated the priority to ensure that 1.1 gets finished.
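
For reference on the storage point above: whether a record carries a wavefunction payload is controlled by the result protocols at the QCElemental level. The sketch below is illustrative only (the molecule, method, and basis are placeholders, not the actual SPICE specification); it shows the same single point requested with and without wavefunction storage.

    # Illustrative sketch: toggling wavefunction retention via QCElemental
    # result protocols. Molecule, method, and basis are placeholders, not
    # the actual SPICE specification.
    from qcelemental.models import AtomicInput, Molecule

    mol = Molecule.from_data("He 0 0 0")

    # Keeps orbitals and eigenvalues with the record -- this is the kind of
    # payload that drives rapid storage growth on single-point datasets.
    with_wfn = AtomicInput(
        molecule=mol,
        driver="energy",
        model={"method": "b3lyp", "basis": "def2-svp"},
        protocols={"wavefunction": "orbitals_and_eigenvalues"},
    )

    # Discards the wavefunction, keeping only scalar results and properties,
    # as in the dipeptide optimizations.
    without_wfn = AtomicInput(
        molecule=mol,
        driver="energy",
        model={"method": "b3lyp", "basis": "def2-svp"},
        protocols={"wavefunction": "none"},
    )

    print(with_wfn.protocols.wavefunction, without_wfn.protocols.wavefunction)

How this knob is exposed per dataset in the actual submissions is set in the dataset specification; the sketch only shows the underlying protocol setting.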

Infrastructure needs/advances

  • DD:

    • cutting a QCEngine release today; this addresses the pycpuinfo issue we’ve observed on PRP

    • will merge QCFractal#700, which addresses the cleanup issue seen on e.g. Lilac and elsewhere

    • will check whether both are compatible with QCFractal#701, which uses the latest QCElemental and QCEngine, then merge

    • cut a release of QCFractal

    • update all deployed environments and deploy on all compute resources (see the version-check sketch after this list)

  • PB – Can jobs report when they’re killed by the scheduler (in their qca-dataset-submission tracking info)?

    • DD – I don’t think so; this would cross a bunch of process boundaries, and to the manager it just looks like a KeyboardInterrupt

  • JW – Details about pycpuinfo issue?
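
A quick way to confirm that a given compute resource actually picked up the new releases after the environment updates above. This is only a sketch using the standard library; the package list is just the stack discussed here, and no particular version numbers are assumed.

    # Illustrative report of which QC stack versions a deployed environment
    # is running; run inside the manager's environment after updating it.
    from importlib.metadata import PackageNotFoundError, version

    for package in ("qcelemental", "qcengine", "qcfractal"):
        try:
            print(f"{package}: {version(package)}")
        except PackageNotFoundError:
            print(f"{package}: not installed")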

✅Action items

  •  

⤴Decisions
