
https://docs.google.com/document/d/1gTcfKDzia0uW8K2iPWwGnA_g-6mlms-4utdd-hFdrnU/edit?usp=sharing

SB - reasonable handle on what our workflow looks like; early stage, scripts okay.


HJ - should sit down and think about the schema first, then consider automation. Still experimenting with schema; this may be blocking progress.


MT - bar for perfect would hamper progress; go for addressing current needs, and evolve with time. E.g., if we want to write code or tests that rely on bond orders existing, we have to go back and find where they were calculated and where they were not.


SB - shape of data changes depending on algorithm; hard to define input schema; output schemas are easier to define. Defining data models is inherently difficult, especially when the shape of the data is constantly changing. Probably where software scientists should step in and define versioned schemas at various points, while allowing the science to continue to have flexibility in bushwhacking


JW - QCSubmit: would be very informative to have a walkthrough of what it does and does not do, how it does it, and go from there on what gaps need addressing

  • Are there areas of work culture we can improve?

    • Tone of communication?

      • We can frame things as a team effort whenever possible, not as challenges for an individual

      • We already do a good job of this, but avoiding blaming an individual, and instead thinking about how the process could be changed to prevent a problem in the future

    • Dealing with time zones?

    • Clearer priorities?

    • Communication with leaders?

      • When deadlines are approaching, have frequent check-ins to evaluate progress and reschedule deadline if needed

    • Other unnecessary stress?

      • Working together on big tasks creates a shared responsibility and makes hard decisions easier

      • Recognize that no (or very few) companies currently rely on our toolkit/FFs, so they’re not waiting on our deadlines

Notes

Virtual devs week organization

  • 15 minute break each hour

  • End early if possible

Day 1:

  • Round table updates

  • Feedback on development and work practices

  • Interest in physical developers week meetup

  • Namespace reorganization

  • Create subgroup for QC dataset organization

  • (maybe) Slack channel discussion

Day 2:

  • QCSubmit demo

  • (maybe) namespace stub proof-of-concept

Round table updates

Jaime –

  • From Spain. Master's in bioinformatics, PhD in computational chem (metal ions in biology).

  • Worked a bit on automation in parameterizing/running metal-biological simulations.

  • Got interested in scientific pipelines.

  • Worked on porting AmberTools20 to conda-forge, other ugly infrastructure work

David Hahn --

  • From southwest Germany. PhD in molecular dynamics method development and applications.

  • Currently openFF postdoc at Janssen

  • Work on benchmarking FF parameters WRT P-L binding free energy, developing benchmarking dataset, free energy calculation workflow

Josh Horton --

  • From Northeast UK. PhD in bespoke FF parameterization. Now a postdoc with OFF working on the bespoke pipeline

Josh Fass --

  • From Egypt. Senior PhD student with Chodera. Working on pilot experiments for Bayesian sampling in models where property calculations are cheap.

  • Working on different ways to explore atom type environments, to alleviate need for brute force sampling

Simon Boothroyd

  • From central UK. Working on OFF-evaluator, lots of infrastructure work. Migrated PropertyEstimator to OFF Evaluator. Significant amount of documentation written, examples on CoLab.

  • Created Nistdataselection – main repo for curating phys prop datasets

  • Last week, made a new repo that rethinks the nistdataselection structure, for a better-architected way to access said data.

Matt Thompson

  • PhD in chemical engineering with Peter Cummings, with no biology whatsoever. Studied properties of ionic liquids. Didn’t care much for FFs. Worked on classical atomistic MD (GROMACS+LAMMPS) of materials.

  • Software scientist with OpenFF. Working here for 6 weeks. Working on small tasks to get onboarded.

  • Will work on creating System object. Expecting cooperation with MosDef group/GMSO code. Will be highly interoperable with both MD engines and machine learning frameworks.

David Dotson

  • From St. Louis. PhD on large protein systems. Worked in healthcare industry and did devops/data engineering on the side.

  • Met with Daniel Smith at SciPy, heard about OpenFF.

  • Started at the same time as Matt, working 50% time. Working on implementing proper torsion interpolation.

  • Looking at accelerating performance of property calculations.

Hyesu Jang

  • From Korea. Now grad student with Lee Ping.

  • Started by making a package for RESP calculation.

  • Now focusing on running fits and generating new FF parameters.

  • Currently working with Jessica on improving valence parameters.

Jeff Wagner

  • From Los Angeles. First research internship at a national lab modifying LAMMPS. Confused by biomolecules. Wanted to be a doctor.

  • PhD at UCSD, worked on a mix of methods development and benchmarking, also some docking/screening.

  • Bothered by inaccuracy in field, disconnect between development and application.

  • Now maintaining/developing OpenFF toolkit, doing general organization infrastructure stuff.

Jeff Setiadi

  • From Sydney, Australia. Did condensed matter physics. Went directly to PhD, looking at protein-ligand interactions.

  • P-L is hard, would be useful to work on lower-entropy systems.

  • Now working on pAPRika

Jessica Maat

  • From San Diego. Studied math+chem in undergrad. Now 4th year grad student with Mobley.

  • Has worked on a bunch of OFF projects. Now looking at trivalent nitrogens. Has done experiments on assigning trivalent nitrogen parameters.

  • Has also made a tool to select diverse nitrogen compounds, and done dataset generation for FF releases.

Tobias Huefner

  • Bachelor's, Master's, and PhD in comp chem in Germany. Focus on drug discovery, biomolecular solvation.

  • Now in Gilson lab. Will be working on benchmarking of docking programs (CELPP).

  • Looking at understanding docking performance as a function of molecular features.

  • Also looking at atom typing, in the context of how it can limit accuracy, and whether atom types can be optimized using physics-based methods.

Development practices

(Day 1)

SB – Writing code fast with Owen, we didn't do a lot of testing. We'd do PRs and not merge without reviews, and I think that was the right way to go. Other repos, especially data-focused ones, have a big need for a quality version history, so we need to handle it on a case-by-case basis.

HJ – Used single jupyter notebook for generation of datasets; used PRs for QCA dataset submission.

SB - The kind of PRs we were doing for the data curation / choices etc: https://github.com/openforcefield/nistdataselection/projects/1

JW - Justification for QCA submission approach - needed a way to document and keep track of what we did, be able to evolve approach over time. Can later try and capture the best approach.

JH - Took a few rounds to figure out pattern in QCA submission. Information was all spread out. Needed to synthesize it all.

JM - May be useful to have scripts we all use. Functions that are common for generating the JSON, etc.

JW - QCSubmit should be able to capture many of the lessons we’ve learned. Are there specific things we can list that we’ve learned from this?

Confluence slowed to a crawl at this point – All notes will be taken on this google doc and copied to confluence at the end of the meeting

Google drive docs
url

Namespace reorganization

  • Building support for something like "from openff import toolkit, evaluator, ..." -- at least covering how we want the final namespaces/imports to look, and maybe getting into implementation of the changeover

  • Full proposal: Infrastructure Architecture

    • Before I finalize everything with releasing the re-branded OpenFF Evaluator framework and commit to the new API naming conventions, I wanted to suggest we invest some time to clean up the software stack offered by OpenFF. While everything exists under the same GitHub org, there is almost no consistency between our packages. This will only get worse over time and, equally, will only get much harder to reverse as the user base expands. I.e., currently we have

      Code Block
      from evaluator import ...
      from openforcefield import ...
      from cmiles import ...
      ...

      while it would be much more cohesive to have an overall architecture similar to

      Code Block
      from openff.evaluator import ...
      from openff.toolkit import ...
      from openff.fractal import ...
      ...

      In practice this seems obtainable through an implicit namespace file structure like https://packaging.python.org/guides/packaging-namespace-packages/#native-namespace-packages while still maintaining individual repositories. This style of architecture/design would seem to lend itself to creating smaller, more focused repos/packages (similar to a set of software 'microservices').

      I understand this would initially cause a large amount of disruption and possible confusion among users, but the end result would be a cohesive, elegant stack, with all the software we build being connected and identifiable under the same umbrella. Moreover, I believe it would push us to build software which more rigidly follows a single-responsibility pattern, rather than monolithic packages which 'do everything', which the toolkit seems to be heading towards (especially if it simply absorbs things like fragmenter and the QC submission frameworks).

      It would be fantastic to start moving away from a style similar to a zip file of disconnected tools, and to start planning longer term about how we want our software to look and be interacted with.
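A minimal sketch of the native namespace layout that guide describes (repo and package names here are illustrative, not final): each distribution omits openff/__init__.py, so Python merges every installed distribution into a single openff namespace (PEP 420).

      Code Block
      # Hypothetical layout of one separately-installable distribution:
      #
      #   openff-evaluator/
      #       setup.py
      #       openff/                # no __init__.py here (PEP 420)
      #           evaluator/
      #               __init__.py
      #
      # openff-evaluator/setup.py
      from setuptools import setup, find_namespace_packages

      setup(
          name="openff-evaluator",
          packages=find_namespace_packages(include=["openff.*"]),
      )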

(Day 1)

JW: 

  • Opening discussion about namespace refactor. Simon a few months ago wanted to unify the importing of different OpenFF repos to look similar

    • from cmiles import …

    • from openforcefield import …

    • from evaluator import …

  • Gives no indication that they’re under one umbrella, can be confusing to new users!

  • Simon’s proposal is roughly based around using a common namespace:

    • from openff.evaluator import …

    • from openff.toolkit import …

  • Major limitations:

    • Can only have one path to import a given package

    • Migrating a repo into this namespace is kinda irreversible

JRGP: Might be some tricky bits at implementation level - namespace/folder collisions? How to let the user know which ones are installed?

DD: Did this in datreant; some limitations, one being that you can never get anything out of "import openff" itself

  • Will this also work with versioneer?

  • Will this work with our packaging strategy?

  • Some projects have a loose namespace strategy of naming everything “django-<thing>”, “pytest-<thing>”, which is a convention that everything follows but isn’t necessarily a formally unified namespace, or “from dask import distributed”


JRGP: https://github.com/pytest-dev/pytest/issues/1629#issue-161422224


pytest plugins register mechanism? https://docs.pytest.org/en/2.7.3/plugins_index/index.html

MT: What other orgs/projects have done something like this?

  • Pytest did something similar with “py”, but they may be moving away from it

  • Simtk?

SB -- 

  • Stub namespaces may also be helpful

  • I suspect that larger orgs may do this internally, but we just don’t see this in open source

  • Would be interesting to see how OE does it internally

JRG -- 

  • OE seems to be a single big project with subpackages

  • Unified namespace would be a good organizational move

  • Agree with DD’s idea for “smart registry”

SB -- 

  • So, “registry” approach would make a separate repo that uses something like an entry point, and contains no other code
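A rough sketch of how such a registry might discover packages, assuming a hypothetical "openff.packages" entry-point group that each real package would declare in its own setup.py:

  # Hypothetical stub "registry" package: ships no real code of its own,
  # just discovers installed OpenFF packages via a made-up entry-point group.
  # Each real package would declare (hypothetically, in its setup.py):
  #   entry_points={"openff.packages": ["evaluator = evaluator"]}
  import pkg_resources

  def discover_openff_packages():
      """Map entry-point names to the imported OpenFF packages."""
      packages = {}
      for entry_point in pkg_resources.iter_entry_points("openff.packages"):
          # entry_point.load() imports and returns the target module.
          packages[entry_point.name] = entry_point.load()
      return packages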

JW: Maintaining reverse compatibility is important

SB -- Not sure about putting everything under OFF repo, since that’s associated with toolkit 

(General) -- Do we want “from openforcefield import” or “from openff import” 

(General) -- “openff”

JW -- Is there anything that shouldn’t be under openff namespace or OFF github org?

SB -- “research” code should be under a non-openforcefield org/namespace. Only packages that we're committed to maintaining should be under the OFF org/namespace.

JW -- I’d like to move toward a model where anything in the github org is anything we’ve agreed to maintain or we’ve deliberately archived.

SB -- agree that things in our org space should either be stamped that it’s maintained or archived.

MT -- Red and yellow badges mark things we feel a certain responsibility to move toward yellow or green; the organization boundary helps determine what we will expend software scientist effort on.

JRG -- I think there’s a sweet spot here according to this split; smart folks may disagree on splitting. Google has a github org, but it’s a mess. It’s a bit of a wild west. How do we pursue something that still has our name, but also gives us freedom to continue to progress?

  • Clear signalling of status of package, such as “research”, “infrastructure”, “archived”

SB -- Wasn’t sure if a different org for “OpenFF studies” makes sense; not sure where to draw the lines, how much segmentation necessary.

MT -- clear delineation of what's active and archived could go a long way. Can definitely see the argument that an org with a hundred repos can look messy, but it's not clear at what point that becomes a problem.

JW: Software that is actively maintained stays with the org, exploratory stuff is in individuals or lab orgs. Keep an internal list of repos that lists if each is active

JRG: Likes the idea of a “cluster” of organizations, with various spinoffs for different things, and keep each org fairly trim of repos. So one main one for core software, another for papers, etc. various research spinoffs. Should we come up with practices/patterns to keep up with for “outlying” repos?

DD -- Overhead of categorizing all of these is hard. Maybe we should have tags like “research” “infrastructure” “archived” “dataset” that we can attach to these projects. GH org webpage can let you filter by tag. 

SB -- Agree. I also wonder about repo names -- Can we standardize on package naming patterns? 

MT -- Would that make us distinguish between software and data repos?

SB -- Yes

MT -- Maybe the distinction is possible

Summary -- 

  • We should aim to make a shared OpenFF namespace to enable “from openff import toolkit, evaluator, …”

    • It would be best to do this in a reverse-compatible way (so we could still do “import openforcefield”)

    • This may be possible using stub repos/entrypoints in the short term (wouldn’t require changing the main repo)

    • Dotson, Thompson, Boothroyd will look into feasibility of this approach 

  • We should apply weak pressure to bring repos into OpenFF GitHub organization, so that they don’t become orphaned after the maintainer leaves. 

    • However any preference from the original owner to not migrate to the OpenFF org should take precedence.

  • Repos under the OpenFF org should either have the software science team bring them to green docs+CI status, or be clearly identified as archived

  • We should apply tags to repos so that users can look at the OFF org page and filter down to repos of interest

  • Undecided: Do we want many orgs to separate infra vs. papers vs. datasets? Or a table of contents repo in the OFF org to point to all relevant internal and external repos?


(Day 2)

  • (if ready) stub repo proof-of-concept

    • Stub-accessed evaluator doesn’t have version, but this probably isn’t too important 

    • Will this work with pkg_resources?

    • Can we do this, avoiding importing all?

  • Which packages should be included?

  • What should namespace be called? Openff?

  • What is the time window for migration?

(asynchronous/later in day) -- Implicit namespace work

PEP 420

Problems with _version.py → Need to change setup.cfg so that versioneer looks in the module, instead of the top-level package
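For example, a sketch using versioneer's standard setup.cfg keys (the openff/evaluator paths here are illustrative):

  [versioneer]
  VCS = git
  style = pep440
  versionfile_source = openff/evaluator/_version.py
  versionfile_build = openff/evaluator/_version.py
  tag_prefix =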

Made implicit namespace test packages. To install:

conda create -n test -c omnia/label/beta test_package_a test_package_b

To run:

from test_org.test_package_b import bar

bar.Bar()

Stubby metapackage

SB -- Could define a “magic path” in metapackage __init__.py

Many modules for different parts of OpenFF toolkit

Ex.:

  from openff import topology
  OffMol = topology.Molecule
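One possible shape for that "magic path" (a sketch only; the redirect targets are assumed): a module-level __getattr__ (PEP 562, Python 3.7+) in the metapackage's __init__.py that lazily forwards submodule access to the real packages.

  # openff/__init__.py -- hypothetical stub metapackage
  import importlib
  import sys

  # Map names under the metapackage to the real module paths (assumed).
  _REDIRECTS = {
      "topology": "openforcefield.topology",
      "evaluator": "evaluator",
  }

  def __getattr__(name):
      if name in _REDIRECTS:
          module = importlib.import_module(_REDIRECTS[name])
          # Cache so later "import openff.<name>" also resolves.
          sys.modules[f"{__name__}.{name}"] = module
          return module
      raise AttributeError(f"module {__name__!r} has no attribute {name!r}")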


Disadvantages of metapackage --

  • Would need to be kept in sync with OFFTK package structure

  • Evaluator imports might need to be either all or by-module

Advantage -- Would let us split off OFFMol, OFFTop, OFFToolkitWrapper into “openforcefield-core” package, and users wouldn’t need to change import statements

Could just move everything in OFFTK to openff top-level dir, but ALSO have an openforcefield directory hierarchy with stubs that redirect to the new code’s location
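For example, one of those redirect stubs might look like this (hypothetical module paths, assuming the real code has already moved under openff):

  # openforcefield/topology.py -- hypothetical backwards-compatibility stub
  import warnings

  # Re-export everything from the new location so old imports keep working.
  from openff.toolkit.topology import *  # noqa: F401,F403

  warnings.warn(
      "openforcefield.topology has moved to openff.toolkit.topology",
      DeprecationWarning,
      stacklevel=2,
  )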

  • Which packages should be under OFF namespace?

  • Toolkit

  • Evaluator

  • Openforcefields

  • smirnoff99Frosst

  • QCSubmit (when officially released)

  • System (when released)

  • CMILES?

  • Fragmenter?

  • Chemper?

  • Smarty?

  • Taproom (Talk to setiadi/slochower -- coming to OFF organization)

  • PLBenchmarks software (eventual)

  • PLBenchmarks data (eventual)

  • Maybe’s -- Not part of OFF org, maybe will migrate in later? Talk to author

    • Perses? (Talk with Hannah)

    • Freeenergyframework? (talk with Hannah)

    • Gimlet? (Talk with Yuanqing/John -- Probably no)

  • No’s

    • Openbenchmarks (no benefit / not much to import)

  • What should namespace be called?

    • openff

  • When should the migration happen?

Determining best practices for QC dataset naming and organization

(Day 1)

SB - if we store the rationale for the dataset alongside metadata (e.g. date, CMILES, etc.), that captures something you know now that will be valuable later. The name becomes a bit irrelevant then.


JW - 2019-07-02 VEHICLe gives a good example of what we probably want in a dataset README, metadata-wise


JH - if we took some of the metadata in e.g. that README and put it in e.g. JSON then it would be usable as metadata programmatically


JW - perhaps store a URL to the README? So it’s easy to find later? Not certain of best approach. Perhaps we should include raw markdown in metadata submission?


TG -- I think we should figure out a way to roll our own datasets. Kinda what’s being done now, but need to go through Ben. If we just kind of choose a dataset we want and just keep adding to it, that might be more helpful.


JW -- If you have a dataset with a name, was it easy to find in qca-dataset-submission?


TG -- I just go to get collections in the client; I don’t use qca-dataset-submission at all -- not an accurate representation of what’s in the database. If there was one unified dataset, that would solve all issues.


HJ -- Next release pull-down will be around 300-400MB


TG -- In addition to that, we could make a dataset that contains the molecules that were complete at time of download for fitting


JW -- So, we could look at the complete molecules that Hyesu pulled down for this release, and retroactively collect them into a dataset. This could help us refine the dataset labeling process later.


TG + HJ -- Hessian dataset labeling will be complicated, but doable.


(General) -- Do we want the tarball to focus on:

  • Exactly reproducing the same output as we got from the force field fitting

  • Providing re-usable infrastructure for other people to do our _style_ of fit


JW -- Are these mutually exclusive?


SB -- I don’t think so. We could make infrastructure to handle all the different steps.


DD -- Having a snapshot of the data + code + dependencies would be useful.


TG -- A more complete solution would be to mirror/tar up a snapshot of the whole QCA when we do these fits. 


DD + SB -- Disagree. It should be sufficient to store the result of the QCA pulldown.


To do (will discuss this at 11:01 Pacific)

  • Hyesu will send 1.2.0 fitting data to Gokey/Dotson so they can verify IDs are present and that dataset could be labeled. 

  • Continue discussion on naming

(Day 2) QCSubmit prototype:

  • TG -- How does this handle QCSchema changes?

  • DD -- You could add an option to target a different version of QCSchema

  • TG -- So, instead of dumping out to a JSON blob, we’d submit the schema?

  • JH will send notebook to HJ

  • CLI discussion

    • We should use click for argument parsing

    • Josh should just go ahead and build a CLI -- We don’t have the manpower in the short term to derive long-term CLI practices/common flags/verbiage. 

  • Provenance information

    • TG/JW -- We could have a `dataset.metadata['long_description']` and `dataset.metadata['long_description_url']`, `submitter`, etc.

    • DD -- Could have required fields defined in the class (see the sketch after this list)

    • JH -- I could make that 

    • We will open an issue on QCSubmit for which fields to make required

    • Issue opened at https://github.com/openforcefield/qcsubmit/issues/3
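A sketch of what required fields defined in a class might look like (field names taken from the discussion above; pydantic is assumed here because it is common in the QCArchive ecosystem, not because this is QCSubmit's actual API):

  # Hypothetical required-metadata model; not the actual QCSubmit API.
  from pydantic import BaseModel, HttpUrl

  class DatasetMetadata(BaseModel):
      """Metadata every dataset submission must carry."""

      submitter: str
      long_description: str
      long_description_url: HttpUrl

  # Instantiation fails loudly if any required field is missing:
  metadata = DatasetMetadata(
      submitter="jdoe",
      long_description="Optimization dataset of diverse small molecules.",
      long_description_url="https://github.com/openforcefield/qca-dataset-submission",
  )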

Migrating packages over to GitHub Actions and unifying under one OE license

Reorganizing/defining/consolidating the many development-related slack channels

  • Archive maintainers and move traffic into infrastructure, OR rename “Maintainers” → “developers”?

    • Rename maintainers to developers

  • Several other ideas discussed, but none adopted. Current Slack channels seem to be working.

Making a contributor community

Deciding upon a consistent approach and theme for each repo's docs

  • Toolkit docs are hard to navigate

  • Evaluator uses standard docs

  • Want to break into tutorials, developer docs, API

  • Napoleon format allows nice cross-references

  • Can turn notebooks into documentation pages w/ colab links

  • KinoML uses mkdocs -- Very nice, but somewhat immature

  • MT -- Can we break this into discrete questions?

  • SB -- Three areas

    • Theme

      • Material

      • RTD

      • Sphinx-bootstrap

      • QCFractal (descendant from RTD)

      • Theme working group: JW, SB, DD, MT, JRG, KCJ

      • Working group will make proposal and infrastructure team will open PRs for changes in all repos

    • Content

    • Engine (kinda related to theming)

      • Mkdocs (requires google-style docstrings, might support numpy soon, doesn't support cross-links)

      • Sphinx (supports Material/RTD/sphinx-bootstrap)

      • We will continue using Sphinx

    • Hosting

      • ReadTheDocs (current; problematic since build status isn't reflected on pull requests; for-profit, so the free tier could be turned off at any time)

      • GitHub Actions/Pages

      • We should switch over to GitHub Actions/Pages -- the KinoML repo is a good example of this. We'll have a meeting later this year to show the steps toward making this switchover, and the infrastructure team may start opening PRs to make the changeover.

  • JRG -- mkdocs is really nice, but they don’t support anything other than google-style, and the infrastructure is a bit green

  • JRG -- https://bashtage.github.io/sphinx-material/ is a Sphinx port of the mkdocs Material theme

  • (general) we should check with Karmen about whether we can get a template to match with OFF website

  • Discuss down the line -- Do we want our own cookiecutter? Do we want to get one of us as a maintainer on the existing cookiecutter?

First Annual Devs Week Feedback

  • Could have optional sessions and provide schedule ahead of time. Takes a lot of mental stamina to follow less relevant topics. 

  • Technical discussions were interesting, but not directly applicable. Channel naming discussion wasn’t that important. 

  • Put together proposals for each topic -- Present ideas and options clearly. Frame things as “here’s what we want to do, do you have objections?”

  • Could have discussions dedicated to early career researcher topics. Also unstructured social time would be very helpful.

  • Career perspectives from older folks? 

    • Probably doesn’t need to be structured/scheduled. This would come up in informal discussion.

    • Chris Bayly story time?

  • Breakout/hacking sessions would be good. Could have sessions where people present problems and we work together to solve them.

Action items


Decisions