JW – Does adding this implicitly endorse pip installation?
MT – For reference: Conda is both a package manager and a venv solution, which isn't broadly the case in the Python world. Adding this doesn't directly endorse PyPI deployment (until there's a package on PyPI itself). What it would do is have me spin up new CI using a virtual environment manager rather than conda, so running `pip install .` would fetch deps from PyPI. But we wouldn't document this in the install instructions, so users wouldn't have the expectation that pip installing would bring in deps. This specification would be totally separate from the conda recipe.
MT – Right now we have devtools/conda_envs formally uncoupled from the feedstock meta.yaml. ~At the end of the "define deps in pyproject and install using pip" road is having only one place where deps are defined (pyproject) and having the conda feedstock infer deps from that.~ (Maybe not entirely true; it would still require manual cross-checks, or at least the automatic synchronization may not be worth it.)
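For concreteness, a minimal sketch of what "define deps in pyproject" looks like. The build backend and version placeholder here are assumptions, not taken from an actual repo; the runtime deps are the ones MT cites below for openff-units:

```toml
[build-system]
# backend choice is an assumption, not from the actual package
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "openff-units"
version = "0.0.0"  # placeholder
# deps resolved from PyPI when a user runs `pip install .`
dependencies = [
    "pint",
    "numpy",
    "openff-utilities",
]
```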
MT – Overall I think having deps defined both in conda yamls and pyproject, and testing both build methods, is a net positive.
JW – Two things being discussed:
- To the extent that having pip builds helps us experiment with going to pip distribution, this could be worthwhile. But if we're not going to pip distribution, I don't know whether the marginal cost of pip builds is worthwhile.
- The only way I can see us ensuring the correctness of the pyproject dep definitions is by having pip builds.
JW – Is there value in defining deps in pyproject even if we don’t migrate stack to pip
MT – There's some inherent value in defining deps in pyproject. Compared to conda, pip/uv/etc. are very fast and more widely adopted, and the marginal addition to the CI workflow is ~20 lines (see the sketch below). It's worth separately considering the cost of putting things onto PyPI: e.g. not doing evaluator and recharge, but yes doing simpler deps. E.g. openff-units on PyPI just requires pint, numpy, and openff-utilities.
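A sketch of roughly what that ~20-line CI addition could look like, assuming GitHub Actions; the workflow name, Python version, and test runner are hypothetical:

```yaml
name: pip-install
on: [push, pull_request]

jobs:
  test-pip:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install package and deps from PyPI
        run: python -m pip install .
      - name: Run tests
        run: |
          python -m pip install pytest
          pytest -v
```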
JW – Two things that make me nervous are:
- Users who would have been able to do what they wanted with a conda install might try the pip one instead, fail to e.g. get bespokefit going, and walk away.
- Distributing a package on pip could constrain our development options (e.g. if we needed to replace the QC backend in bespokefit with something non-pip-installable, we'd either need to make the development decision based on pip package availability or deprecate the pip package).
MT – These are valid points but don’t outweigh the benefits.
JW – What about a case where we add new functionality that requires a conda dependency, but the package is already pip-only?
MT – More realistic is that an upstream gets abandoned and goes unmaintained. This already happens. Unmaintained things can still be installed until they slip out of version compatibility.
…
MT – If an upstream has a falling-out with conda-forge, then we'll need to scramble to find a solution, but that doesn't prevent us from depending on conda-forge packages.
MT – Unmaintained packages aren’t an immediate fire. We’ve got several currently unmaintained upstreams that we have months/years to resolve.
MT – Tooling and automation for shipping to PyPI is much cleaner than for conda-forge. Most big projects automatically deploy to PyPI on release.
MT – Re: bespokefit not being pip-installable: if a user tries to pip install it and gets a message that it's not available, that's kinda on the user.
MT – Downstream users/developers would get a ton of value from pip installation. People tolerate us deploying on conda, but many potential users try pip-installing our stack, fail, and move on before checking conda.
MT – While I don't know the process for a rollout of our full stack, I'd start with units and utilities. OpenFE can't ship GUFE on PyPI because it needs openff-units.
JW – Approve rolling out some packages on TestPyPI to see how much of our stack can go over, but don't let it hit production.
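A minimal sketch of what a TestPyPI trial could look like, assuming standard build/twine tooling and a configured TestPyPI token; openff-units is the example package from above:

```shell
# build sdist and wheel from pyproject.toml
python -m pip install build twine
python -m build

# upload to TestPyPI only (never production PyPI)
twine upload --repository testpypi dist/*

# trial-install from TestPyPI, falling back to PyPI for deps like pint/numpy
python -m pip install \
    --index-url https://test.pypi.org/simple/ \
    --extra-index-url https://pypi.org/simple/ \
    openff-units
```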