This idea is not specific to pixi, but describes a feature I've never seen any package manager implement. That said, pixi seems to be one of the most actively developed and innovative package managers currently, and I bet a lot of people here are interested in making package managers better, hence posting here 😄
From my perspective, a lot of the pain of package managers comes from the fact that compatibility -- which versions of X are compatible with which versions of Y -- is always expected to be metadata on the depending package, rather than on whichever package was published last. When the depending package is published first, there is really no way to get it right. You have to balance the risk of (a) letting people install package combinations that don't work against (b) not letting people install package combinations that do work, and anything that reduces one risk increases the other.
If I am publishing package A, which depends on package B, then at the time I hit publish I am asked for the last version of B that will work with my current version of A. I can obviously test against the latest extant version of B, so that gives a good lower bound on the answer. If I really trust B's SemVer practices, and only use its public API, I can additionally predict that any patch release on the same minor version will also work. But that doesn't mean the next minor (or even major) version won't work; it only means that it might not. If I conservatively cap my upper bound below the next minor version, then in the scenario where that version turns out to be compatible, I prevent my users from installing a working combination. If I place my upper bound more leniently (or don't place one at all), package B can publish a breaking change and my users instantly start installing an incompatible combination.
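To make the trade-off concrete, here is an illustrative dependency declaration in pixi.toml-style match-spec syntax (the package name `b` and versions are made up):

```toml
[dependencies]
# Conservative: tested against b 1.2.x, so cap at the next minor.
# Risk: blocks a future 1.3.0 that may well have worked.
b = ">=1.2,<1.3"

# Lenient alternative: no upper bound.
# Risk: a breaking 2.0.0 instantly reaches users.
# b = ">=1.2"
```

Neither choice is wrong per se; they just place the risk on different failure modes.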
There is some asymmetry in how bad these failures are. If a package manager lets you install a combination that doesn't work, you can always fix it yourself, e.g. by pinning a version of B. Whereas if a package manager refuses to install a combination that would work, there is usually no way around it without seriously monkeying with internals. The second kind of error is therefore typically the more problematic one. But both are irritating, and they are fundamentally unavoidable only because our solving approaches neglect to take time into account.
The most workable solution, to my mind, is to treat these unknown upper bounds as something that must be crowdsourced from users (since package maintainers probably don't want to be responsible for testing with all downstream packages).
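As a rough sketch of what crowdsourcing could look like, here is a hypothetical resolver step in Python that extends an author-declared upper bound using user-submitted compatibility reports. Everything here -- the `Report` shape, the `min_reports` threshold, the tuple versions -- is invented for illustration, not a real pixi mechanism:

```python
from dataclasses import dataclass

# Versions are (major, minor, patch) tuples for simplicity;
# a real solver would use a proper version type and ordering.

@dataclass
class Report:
    version: tuple  # version of B a user actually tried alongside A
    worked: bool    # whether the combination worked for them

def effective_upper_bound(declared_upper, reports, min_reports=3):
    """Extend the declared upper bound to the highest version that
    enough users report as working, stopping at the first failure."""
    upper = declared_upper
    by_version = {}
    for r in reports:
        by_version.setdefault(r.version, []).append(r.worked)
    for version in sorted(by_version):
        results = by_version[version]
        if not all(results):
            break  # someone hit a breakage here: do not extend past it
        if version > upper and len(results) >= min_reports:
            upper = version
    return upper

reports = [
    Report((2, 1, 0), True), Report((2, 1, 0), True), Report((2, 1, 0), True),
    Report((3, 0, 0), False),
]
print(effective_upper_bound((2, 0, 0), reports))  # → (2, 1, 0)
```

The key design point is that the author's declared bound stays as a safe floor; crowdsourced reports can only loosen it, and a single failure report stops the loosening, which keeps the asymmetry argument above in mind.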
Curious to hear others' thoughts!
Note: most of these ideas copied from snakemake/snakemake#1989