Doesn't the migrator need to do something with py-boost too? Maybe replace boost?

> Doesn't the migrator need to do something with py-boost too? Maybe replace boost?
I presume it could, but we could also solve the py-boost renaming separately.
I would prefer libboost-python over py-boost since it would follow the naming convention used for boost libraries built separately, such as libboost-mpi. And Boost.Python is not a binding of boost but a library to write python bindings; I believe py-boost would be confusing.
Recording what was discussed in the core-call (please correct me if anyone feels differently): rename py-boost to libboost-python.
I think libboost-dev for the headers would be most Linux-like?
Ahhhhhhh. You are right. Ignore me!
No worries. It's good to explore the space a bit. Personally I like to have some variation of "-headers" in the name, so that it becomes very clear in the meta.yaml that the intent is to only use the headers.
Run-exports are a topic that's really hard to penetrate for newcomers, but at least I think it's more likely they'll understand that something that's only headers is only needed at compilation time.
If we don't mind longer names, we could even do libboost-header-only (perhaps people will have heard of "header only" libraries -- boost uses this formulation in their docs as well).
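To make the run-export point concrete, here is a minimal sketch of a downstream recipe fragment. It assumes the headers land in a package called libboost-headers (one of the names floated here); the fragment itself is made up for illustration:

```yaml
# Hypothetical downstream recipe fragment (illustration only).
# A headers-only package belongs in host: it is needed at compile time,
# and since it carries no run_exports, nothing is injected into run.
requirements:
  build:
    - {{ compiler('cxx') }}
  host:
    - libboost-headers   # compile-time only dependency
  # no boost entry under run: this package does not link a boost library
```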
I updated the summary in the OP to reflect libboost-python (as discussed) & libboost-header-only (which is now my preferred suggestion).
fwiw, Boost::headers is the CMake target for their header-only libraries (https://github.com/boostorg/boost_install/blob/develop/BoostConfig.cmake#L17-L18).
Hm. Usually -dev gets you the headers plus the binaries and everything else, no? And we're IMO doing something different, which is "just the headers"...
FWIW, Debian and Fedora both seem to stuff the "headers only" components in their corresponding -dev/-devel packages.
However, I don't think that pattern works for conda packaging because that would make the run_exports a bit wonky. Maybe we should have a libboost-dev package containing the development binaries, which depends on libboost-headers and libboost (the runtime components and also the run export)?
I'm not 100% sold on the headers-only naming convention, mostly because some of those headers will actually need binary DSOs to be useful at build and runtime.
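For illustration, the split described above could look roughly like this in a meta.yaml (a sketch of the proposal in this comment, not an actual recipe; output names and pins are assumptions):

```yaml
# Sketch of the proposed -dev split: the run_export lives on the runtime
# package, and libboost-dev bundles headers + runtime for convenience.
outputs:
  - name: libboost-headers          # headers only, no run_exports
  - name: libboost                  # shared libraries (runtime components)
    build:
      run_exports:
        - {{ pin_subpackage("libboost", max_pin="x.x") }}
  - name: libboost-dev              # development metapackage
    requirements:
      run:
        - {{ pin_subpackage("libboost-headers", exact=True) }}
        - {{ pin_subpackage("libboost", exact=True) }}
```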
I actually thought about reusing -dev in the way you described, though the reason I didn't end up proposing it is that it would be completely counter to how we're currently using library patterns everywhere (i.e. libfoo always comes with headers). I'm not sure we want to go that way TBH, even though we'd maybe design it like that on a green field.
> I'm not 100% sold on the headers-only naming convention, mostly because some of those headers will actually need binary DSOs to be useful at build and runtime.
So to my mind, libboost would host- & run-depend on libboost-header-only, meaning that we always get the required headers when someone uses libboost. In turn, if you need a binary at runtime, libboost-header-only is the wrong host-dependence for that package (by design, because then you need the run-export)!
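Sketched as recipe outputs, the dependency direction described above would look something like this (names as proposed in this comment, not the final implementation):

```yaml
# libboost always drags the headers along in host and run; only libboost
# itself carries a run_export.
outputs:
  - name: libboost-header-only      # deliberately no run_exports
  - name: libboost
    build:
      run_exports:
        - {{ pin_subpackage("libboost", max_pin="x.x") }}
    requirements:
      host:
        - {{ pin_subpackage("libboost-header-only", exact=True) }}
      run:
        - {{ pin_subpackage("libboost-header-only", exact=True) }}
```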
I'd prefer to use the boost feedstock for consolidation and archive this one. That will enhance discoverability in the future.
The boost feedstock would then be headers-only, while all the libs that need to be built will have their own packages? IOW, the libboost-xxx packages would contain only the libraries, not the headers, and would depend on boost, which would provide all the headers (only?). I see advantages in having separate packages for the libraries; for instance, they can have their own variants (libboost-mpi is built for several MPI variants, libboost-iostream could be built with several compression variants). The build matrix would be unmanageable in a single package.
I wonder if a project that needs the latest libboost-xxx but also requires a python package that depends on a Boost.Python version that is pinned globally will be possible? Ideally, the Boost.Python lib would be statically linked in python extension packages to remove the dependency and solve this issue...
All 3 packages mentioned in the OP can be built in a single feedstock. Technically speaking it doesn't matter if it's this one or the other, but "boost" is a better feedstock name than "boost-cpp" (especially if there's no more package of that name; though we have this situation elsewhere as well...).
> I wonder if a project that needs the latest libboost-xxx but also requires a python package that depends on a Boost.Python version that is pinned globally will be possible?
In general everything that depends on the binaries would need a corresponding migration to move to a new (resp. the newest) version, but any feedstock can use conda_build_config.yaml to opt into newer versions.
> Ideally, the Boost.Python lib would be statically linked in python extension packages to remove the dependency and solve this issue...
In conda-forge, we generally prefer shared libs (and migrating where necessary), so I doubt this is ideal here.
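As an example of that opt-in: a downstream feedstock can carry its own recipe/conda_build_config.yaml overriding the global pin. The key name below (boost_cpp) is a placeholder and has to match whatever key conda-forge-pinning uses for boost at that point:

```yaml
# recipe/conda_build_config.yaml in the downstream feedstock:
# build against a newer boost than the global pinning, for this feedstock only.
boost_cpp:        # placeholder key name; must match the global pinning's key
  - "1.82"
```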
Over in the land of opentelemetry, the header-only install has an -api suffix by upstream convention. I don't think this is too bad, but I do prefer the explicitness of -header-only. What I'd like to avoid is yet again having divergent names for the same thing (header-only versions of a library) within conda-forge.
I haven't heard many more responses here. Should we do an informal poll (multi-vote possible)?

- libboost-header-only / libopentelemetry-cpp-header-only / ...
- libboost-headers / libopentelemetry-cpp-headers / ...
- libboost-api / libopentelemetry-cpp-api / ...

Perhaps -headers is the best trade-off?
Implementation PR is ready for a first round of reviews: https://github.com/conda-forge/boost-feedstock/pull/164
Once we're converging towards agreement there, I'll try writing the piggyback migrator.
Something is still not clear to me regarding runtime vs devel dependencies. For instance, with the current implementation,
$ mamba create -n test pytango boost-cpp=1.82
Looking for: ['pytango', 'boost-cpp=1.82']
Could not solve for environment specs
The following packages are incompatible
├─ boost-cpp 1.82** is requested and can be installed;
└─ pytango is uninstallable because there are no viable options
├─ pytango [9.3.3|9.3.4|9.3.5|9.3.6] would require
│ └─ boost >=1.74.0,<1.74.1.0a0 , which requires
│ └─ boost-cpp 1.74.0.* , which conflicts with any installable versions previously reported;
└─ pytango [9.3.5|9.3.6|9.4.0|9.4.1] would require
└─ boost >=1.78.0,<1.78.1.0a0 , which requires
└─ boost-cpp 1.78.0.* , which conflicts with any installable versions previously reported.
Why does boost depend on boost-cpp, since it does not depend on any other Boost (built) libraries?
At runtime, a python extension only requires the libboost_python* libraries that match the version it was built with - and the python version. On the other hand, building a python extension requires libboost_python* and the matching headers. Is there any chance to capture this?
With the new implementation, I wish it would be possible to install a python extension that depends on the libboost-python global pinning (1.78) together with a more recent version of libboost-headers.
> Why does boost depend on boost-cpp, since it does not depend on any other Boost (built) libraries?
Because previously there was no way to only depend on the headers (or anything but the full shebang).
> [...] Is there any chance to capture this?
https://github.com/conda-forge/boost-feedstock/pull/164 does this - thanks for confirming that the link check did its job correctly (causing me to switch libboost-python to only depend on libboost-headers rather than the full libboost).
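Spelled out for a downstream python extension, that would look roughly like this (a hypothetical recipe fragment; it assumes libboost-python run-exports itself like the other outputs):

```yaml
# Hypothetical recipe fragment for an extension using Boost.Python.
# libboost-python in host pulls libboost-headers into the build environment,
# while its run_export pins the matching libboost_python runtime library.
requirements:
  build:
    - {{ compiler('cxx') }}
  host:
    - python
    - libboost-python   # headers arrive transitively, at build time only
  run:
    - python
    # the run_export of libboost-python adds the runtime constraint here
```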
> With the new implementation, I wish it would be possible to install a python extension that depends on the libboost-python global pinning (1.78) together with a more recent version of libboost-headers.
This would be easily possible as follows, but isn't yet implemented in https://github.com/conda-forge/boost-feedstock/pull/164, mainly because a) I didn't think about it and b) I'm not sure (yet) that there's definitely no scenario where this could break.
--- a/recipe/meta.yaml
+++ b/recipe/meta.yaml
@@ -186,11 +186,11 @@ outputs:
host:
- python
- numpy
- - {{ pin_subpackage("libboost-headers", exact=True) }}
+ - libboost-headers >={{ version }}
run:
- python
- {{ pin_compatible('numpy') }}
- - {{ pin_subpackage("libboost-headers", exact=True) }}
+ - libboost-headers >={{ version }}
run_constrained:
# make sure we don't co-install with old version of old package name
- boost ={{ version }}
I'd prefer exact pins for now with the idea that we can relax them later if it is working.
There's a bunch of discussions that touch upon changing our distribution of boost in some way. All of these come with quite some effort, but I believe we could solve them with essentially one PR plus a special migrator.
Open discussions I'm aware of:
[^1]: where the following was said: "-cpp was a trend that I started with boost-cpp. That was a mistake. I'm in favour of changing it [...]"

Rough idea:

- boost-cpp --> libboost, and add a run-export
- libboost-headers: per discussion & a (basically unanimous) vote
- boost from https://github.com/conda-forge/boost-feedstock (which already depends on boost-cpp)
- ~~py-boost to match with Anaconda?~~ Rename it to libboost-python, at least in conda-forge - I also think this name would be much clearer to communicate that it's for boost's python bindings, rather than boost itself.
- boost-cpp only in host, turn into libboost-headers. If also in run, remove it there but use libboost in host (see the sketch below the table).
currently | proposal
--- | ---
boost-cpp | libboost, libboost-headers
boost | libboost
py-boost | libboost-python
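As a sketch of what the piggyback migrator would do to downstream recipes, following the last bullet above (illustrative fragments with assumed names, not a real feedstock):

```yaml
# case 1: boost-cpp appeared in host only -> only the headers were used
requirements:
  host:
    - libboost-headers   # was: boost-cpp
---
# case 2: boost-cpp appeared in host and run -> the package links against
# boost; keep a host entry only, libboost's run_export restores the run pin
requirements:
  host:
    - libboost           # was: boost-cpp (host + run)
```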
Am I overlooking something? Any thoughts/comments? @conda-forge/boost-cpp @conda-forge/boost @conda-forge/core