2023-06-13 QC Meeting Notes

Participants

  • @Joshua Horton

  • @David Dotson

  • Ben Pritchard

  • @James Eastwood

  • @Yuanqing Wang

  • @Matt Thompson

  • @Jeffrey Wagner

Goals

  • Updates on MolSSI QCFractal

    • status of new physical server?

    • plans for training on dataset submission, actioning compute?

    • how best to coordinate? Does it make sense for this biweekly call to be replaced by a working group facilitated by MolSSI?

    • individual instances – does an OpenFF-specific instance make sense?

  • Updates from OpenFF

    • status of OpenFF server; response on grant supplement?

      • from @David Mobley : likely answer is that it’s a “no” on the grant supplement

    • new personnel; best way to onboard

      • one begins in July, other in August

    • replacing meeting invite with a fresh one; currently two floating around

      • we will keep existing invite for now; await new working group call from Ben

    • openff-qcsubmit : must be refactored for new QCFractal

    • issues with pulling v1.1 of industry benchmark set from old QCFractal; unclear exactly why this is happening

  • User questions

    • existing dataset issues

    • new submission issues

Discussion topics

Notes


  • BP – Updates on MolSSI QCFractal

    • status of new physical server?

      • Installed and running; the new server is now the default server

      • both the old and new instances are at https://api.qcarchive.molssi.org

        • if using old FractalClient, will hit old instance

        • if using new PortalClient, will hit new instance
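
A minimal sketch of the point above: both instances sit behind the same URL, so which one you reach depends only on the client package you use. Package and version details below are assumptions based on this discussion, not a checked compatibility matrix.

```python
# Hedged sketch: the instance you hit is selected by the client class, not the URL.

ADDRESS = "https://api.qcarchive.molssi.org"  # same address for both instances

def new_instance_client():
    # qcportal "next" ships PortalClient, which talks to the new instance.
    # Import kept inside the function so this sketch loads without qcportal installed.
    from qcportal import PortalClient
    return PortalClient(ADDRESS)

# Legacy qcportal instead provides FractalClient, which would reach the old
# instance at the very same ADDRESS:
#   from qcportal import FractalClient
#   client = FractalClient(ADDRESS)
```
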

      • DD – We’re currently not running any datasets on QCF. We’d like to migrate to the new instance. Are the new packages released?

        • BP – They’re in an off-label package. I can merge the next branch and make a release any time; I’ll need to put some warnings in the README.

      • JW – New server is both new hardware and new QCF/database version? And we should only write to the new one?

        • BP – Yes.

        • JW – Ok, in that case we should make sure to complete the qcsubmit migration.

      • BP – docs for QCFractal next are here: https://molssi.github.io/QCFractal/

      • BP – So new server is up and running, I’m doing some additional data migration in the background.

      • DD – BP, since the main server is migrated to the new version, could you release the next branch as the default conda package? And update the new docs to point toward current versions.

        • BP – Yup, can do.

    • plans for training on dataset submission, actioning compute?

      • BP – Haven’t thought about this. Was thinking about expanding the demo: lots of Jupyter notebooks so we can do async training. Right now they’re hosted on Binder (qcademo.molssi.org)

        • MT – Unfortunately, Binder is going away.

        • BP – I can run a virtual machine with JupyterHub on the new server, but that may allow a bit of cross-contamination

        • JW – I think it’s compartmentalized, like each user is in a Docker container. Though Colab may do what you want.

        • BP – Just need qcportal installed; Colab may work.

      • DD – As someone managing compute, what should I be aware of?

        • BP – Old compute manager still works. There’s also a new manager type; the docs aren’t done yet. The new manager works entirely through Parsl. John mentioned there may be a problem with his managers because they don’t run in exclusive mode on a node. So far I’ve tested Slurm and a local process pool, but there are other options.

        • DD – Good “Compute managers and workers” docs could include “how do you set up Parsl on your compute cluster?”. Could also include “how we’ve been using compute managers”, which is “submit individual jobs to the compute manager with local executors”.

        • BP – Yup, and you can still do that with local executors.

        • DD – So if the docs include those two pathways that would cover most user needs. That would really help me try out parsl. Also DCole would benefit from this info since he’s looking at running his own.

        • BP – Is DCole part of OpenFF?

          • JW – He’s a collaborator but not part of the gov board.
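
The "local executors" pathway DD describes can be sketched with plain Parsl, roughly as below. This is generic Parsl API only; how QCFractal's new manager actually ingests such a configuration is not yet documented, and the names `N_WORKERS` and `make_local_config` are illustrative, not part of any QCFractal interface.

```python
# Hedged sketch: a plain Parsl config for a local process pool on one machine.

N_WORKERS = 4  # size of the local process pool; pick to match node capacity

def make_local_config(n_workers: int = N_WORKERS):
    # Imports kept inside the function so the sketch loads without Parsl installed.
    from parsl.config import Config
    from parsl.executors import HighThroughputExecutor
    from parsl.providers import LocalProvider

    return Config(
        executors=[
            HighThroughputExecutor(
                label="local_pool",
                max_workers=n_workers,
                provider=LocalProvider(init_blocks=1, max_blocks=1),
            )
        ]
    )
```

A cluster deployment would swap `LocalProvider` for a scheduler provider (e.g. Parsl's `SlurmProvider`), which is the "how do you set up Parsl on your compute cluster?" doc DD asks for.
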


    • individual instances – does an OpenFF-specific instance make sense?

      • BP – I think we’d like to do this. It would be natural to split the OpenFF torsiondrive sort of work from the SPICE ML sort of work. Managing a single huge DB is harder; splitting now may also prepare for an eventual hand-off to another effort, especially from a maintenance point of view.

      • DD – Does it make sense to divide along “who is paying for the instance” as well?

      • BP – Yes, and being able to move that around is helpful. I’ve heard murmurs about a SPICE v2; I’d start a new instance for that, migrate SPICE v1 over to it, and it would become our ML server.

      • DD – Yeah, that makes sense, and then the costs would be splittable out.

      • BP – Yeah, user management too.

      • DD – Does next have more granular user permissions?

      • BP – In the future maybe, but not yet. The next big non-openff ML dataset will go on a new instance.

    • how best to coordinate? Does it make sense for this biweekly call to be replaced by a working group facilitated by MolSSI?

      • BP – I think it makes sense for me to be the nexus of that, but I’ll need a little time.

      • JW – we have two QC-focused FF people coming online in 1-2 months

        • Lily will coordinate then, but she’s in Australia; timezone is challenging

        • CA person may be most relevant for QC person; this meeting time could continue to work

        • Are we proposing multiple meetings, 1 meeting?

        • (General) – We’ve got work to do before scheduling, so let’s not tackle this yet. We’ll revisit this at our next meeting in two weeks.

      • BP – I’ve been approached by two companies about SPICE v2, expecting some work on that coming down the pipe.

        • DD – Congratulations - It’s great to have use cases lined up


  • Updates from OpenFF

    • status of OpenFF server; response on grant supplement?

      • from @David Mobley : likely answer is that it’s a “no” on the grant supplement. So we should follow BP’s lead on the server partitioning and everything else.

    • new personnel; best way to onboard

      • one begins in July, other in August

      • DD – I’m happy to coordinate live training, but would love to use existing notebooks/docs.

      • BP – MolSSI has hired a contractor to help with tutorials and videos. She’s coming online soon (July/August). So that will help, but maybe not on the timescale we need in the next few months.

    • replacing meeting invite with a fresh one; currently two floating around

      • DD – No action needed



  • User questions

    • existing dataset issues

    • new submission issues

Action items

Decisions