Closed · FFY00 closed this issue 1 month ago
Would it be fine to add a dependency on exceptiongroup for Python < 3.11?
Yes, but I was just thinking about what would make more sense: `ExceptionGroup` or a custom exception. I'm not sure.
Okay, I think `ExceptionGroup` makes more sense, since `ConfigurationError` has a `key` attribute; we're just not using it everywhere we should.
Okay. I've really enjoyed using ExceptionGroup in scikit-build-core. I can try to get to this in a week or two unless you get to it first.
How did you handle the gathering of all the exceptions? Thinking about it gave me a bad idea (tm).
The current code isn't very prepared for it. We'll probably want to subclass `ExceptionGroup` to allow adding more exceptions, and provide a context manager to catch and register more exceptions.
Since we have an API for data fetchers, which backend authors are supposed to use to support their own configurations, I'd like to provide a better way to handle this anyway, so it might make sense to go with a more polished solution if we can think of anything.
> Would it be fine to add a dependency on exceptiongroup for Python < 3.11?
Please do not do this, that is yet again more breakage for non-isolated builds. Either vendor it, or don't use it on <3.11. Breaking non-isolated build users so they have to go hunt down this issue/PR just for formatting multi-line exceptions more nicely is a really poor tradeoff. We've been over that multiple times now, and real-world issues get filed.
Why would this break a non-isolated build? If you install `pyproject-metadata` in the non-isolated environment, then you get its dependencies too. It already has a dependency, `packaging`. If you add `exceptiongroup`, then it's a dependency, and it gets installed when you install this package. If it was added via a PEP 517 build hook, then that wouldn't get run, but it's not (this is not even a PEP 517 builder). Other ecosystems (like conda-forge) have dependencies too, so it won't break those; installing this package installs its dependencies, that's how packaging works. I fail to see a situation where this breaks. Maybe if you are making a zipapp and you are using 3.11, it would fail to bundle `exceptiongroup` (and because of that, I always bundle for zipapp on the lowest supported Python anyway).
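For reference, this is the usual pattern for consuming such a backport (the generic approach, not a claim about what pyproject-metadata would actually ship): declare the dependency with an environment marker, e.g. `exceptiongroup; python_version < "3.11"`, and import conditionally so the builtin is used wherever it exists.

```python
import sys

# On Python 3.11+ ExceptionGroup is a builtin; on older interpreters,
# fall back to the `exceptiongroup` backport from PyPI (declared with
# the environment marker `python_version < "3.11"`).
if sys.version_info < (3, 11):
    from exceptiongroup import ExceptionGroup  # type: ignore[no-redef]

# ExceptionGroup can now be used uniformly across supported versions.
err = ExceptionGroup("two problems", [ValueError("a"), KeyError("b")])
```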
> Other ecosystems (like conda-forge) have dependencies too, so it won't break those; installing this package installs its dependencies, that's how packaging works. I fail to see a situation where this breaks
You are right here; it breaks when those dependencies get missed. Here is a concrete example I remember: https://github.com/mesonbuild/meson-python/issues/248#issuecomment-1363814274.
When you have a version-dependent new dependency, repackaging tools for other ecosystems don't get that right, and it's very easy to miss in review: if I see a new PR on the conda-forge feedstock for `pyproject-metadata` and CI is green (which is likely, because there's only a single CI job for noarch packages, not one per Python version), then I'll just merge it and move on. Which will then be followed by a bug report.
And then there's the regular cost any dependency has - a larger dependency graph and just one more place for churn and potential bugs.
All the above to say: this does have real costs, and these kinds of trivial dependencies for non-essential functionality are bad for maintainability and robustness.
A third-party packaging system is supposed to get dependencies right, and if they don't, it's not our fault. Vendoring a dependency means we have to maintain the vendored copy, and using it is non-standard; tools that integrate with exceptions (pytest, PyCharm, IPython, etc.) are a lot more likely to properly support the standard library backport than a custom vendored copy inside some library somewhere. Making it optional also has a non-zero cost in adding extra code and non-standard, more complex structures. Both of these costs are continuing costs that will last at least 3-5 years, while getting the conda-forge repackage correct is a one-time cost, and it's basically their fault for repacking the entire ecosystem with potentially different names and being pretty bad about statically checking whether the requirements match.
I maintain that a backport of a standard library feature is a low-cost addition in terms of long-term maintainability, and worth the (minor and temporary) headaches for repackaging.
> I maintain that a backport of a standard library feature is a low-cost addition in terms of long-term maintainability, and worth the (minor and temporary) headaches for repackaging.
I agree with "low cost".
That said, I'd also say this is "low gain" (at best). It's a very minor convenience in the rare case that you make multiple mistakes at once in `pyproject.toml`. My opinion is still that the costs outweigh the gains here. This isn't important, and I don't like more dependencies for my build dependencies.
As someone who also has to worry about actually paying for packaging engineering effort: I'd just close this and move on, it is too low-prio to work on. All three of us here have better things to do imho.
Currently, we go through the validation steps and just raise an exception; this aborts the validation, so only the first error is reported. It would be good if we could go through all the validation steps and report every error we find, so that users are informed of all issues at the same time.