Closed · valgur closed this 2 months ago
Hi @valgur
I am starting to move that code to a unit test, and I have found that, simplifying:
```python
gcc = textwrap.dedent(r"""
    from conan import ConanFile

    class Pkg(ConanFile):
        name = "gcc"
        version = "1.0"
        package_type = "application"

        def package_info(self):
            self.buildenv_info.define_path("MYCC", "mock-gcc")
    """)
```
```python
pkga = textwrap.dedent(r"""
    from conan import ConanFile

    class Pkg(ConanFile):
        settings = "os"
        tool_requires = "gcc/1.0"
        package_type = "shared-library"

        def requirements(self):
            self.requires("gcc/1.0", headers=False, libs=True)
    """)
```
I think that such `self.requires("gcc/1.0")` is not correct: it doesn't make sense that a shared-library requires an application. If you want `gcc` to be used in `pkga`, it should be a `tool_requires`. The concept of propagating applications in the "host" context to be used somehow downstream is not considered.

The effect you are seeing is that `buildenv` is not propagated exclusively in the "build" context; there are some cases of libraries in the "host" context that need to propagate something via environment to their consumers so they can be used (linked). Thus, it is important that applications to be used as tools are `tool_requires`, not regular `requires`.
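Following that advice, a corrected `pkga` would declare `gcc` only in the build context. A minimal sketch (the `name`/`version` attributes are hypothetical additions for illustration):

```python
from conan import ConanFile

class Pkg(ConanFile):
    name = "pkga"      # hypothetical, for illustration
    version = "1.0"
    settings = "os"
    package_type = "shared-library"

    def build_requirements(self):
        # gcc is an application: consume it as a tool in the "build"
        # context instead of as a regular "host" requirement
        self.tool_requires("gcc/1.0")
```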
Thanks for looking into this!
I made the minimal example more realistic by including a mock `libgcc.so` that is implicitly used by `package-a`, which should be similar to how the real one behaves. `package-b` now fails when `self.requires("gcc/")` is dropped, as expected.
Using a tool with both `self.requires()` and `self.tool_requires()` is a quite common pattern, though, right? It's frequently used for tool+library combination packages like `protobuf`, `glib`, `qt`, etc. The main difference for them is that all of these set `package_type = "library"`, at least on CCI.
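For such tool+library packages, the pattern typically looks like the following sketch (the package name and version are illustrative, not taken from an actual CCI recipe):

```python
from conan import ConanFile

class Consumer(ConanFile):
    settings = "os", "arch", "compiler", "build_type"

    def requirements(self):
        # "host" context: link against the protobuf libraries
        self.requires("protobuf/3.21.12")

    def build_requirements(self):
        # "build" context: run the protoc code generator at build time
        self.tool_requires("protobuf/3.21.12")
```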
It sounds like the real solution for a case like this is to split the package into a separate library and application one, i.e. `libgcc` and `gcc`. This would also solve the pain point of an `application` package needing to `del self.settings.compiler` and possibly `del self.settings.build_type`, while these would preferably be kept around for a `library`.
Still, is there really a use case where a `self.requires(..., run=True)` propagating buildenv variables (as opposed to runenv vars) makes sense? I would expect a `self.tool_requires(..., visible=True)` to be required for that.

Also, if an `application` should not be used in a host context for libraries, maybe that should be made more explicit in the API. It seems kind of inevitable that someone else stumbles upon this footgun.
> Using a tool with both `self.requires()` and `self.tool_requires()` is a quite common pattern, though, right? It's frequently used for tool+library combination packages like `protobuf`, `glib`, `qt`, etc. The main difference for them is that all of these set `package_type = "library"`, at least on CCI.
Yes, exactly. We even added the `self.tool_requires("pkg/<host_version>")` syntax to help track the same version in both build and host contexts. It works fine with `package_type = "library"` because that is good for the `requires()` case, and `self.tool_requires()` already forces enough trait changes to consume things as if they were applications, because that is strongly enforced by the `tool_requires` definition.
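With that syntax, the build-context requirement follows whatever version was resolved in the host context. A sketch (package name and version illustrative):

```python
from conan import ConanFile

class Consumer(ConanFile):
    def requirements(self):
        self.requires("protobuf/3.21.12")

    def build_requirements(self):
        # <host_version> resolves to the same protobuf version as the
        # regular requires() above, keeping both contexts in sync
        self.tool_requires("protobuf/<host_version>")
```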
> It sounds like the real solution for a case like this is to split the package into a separate library and application one, i.e. `libgcc` and `gcc`. This would also solve the pain point of an `application` package needing to `del self.settings.compiler` and possibly `del self.settings.build_type`, while these would preferably be kept around for a `library`.
We just added the `self.tool_requires("pkg/<host_version:otherpkg>")` syntax to support this use case as well (2.0.15, to be released soon).
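Applied to the split-package idea above, this would presumably let a consumer keep the `gcc` tool in lockstep with its `libgcc` host library. A hedged sketch of the syntax (the `libgcc` package and the version are hypothetical, following the split proposed earlier):

```python
from conan import ConanFile

class Consumer(ConanFile):
    def requirements(self):
        self.requires("libgcc/13.2")  # hypothetical split-out library

    def build_requirements(self):
        # <host_version:libgcc> tracks the version resolved for the
        # libgcc requirement in the host context
        self.tool_requires("gcc/<host_version:libgcc>")
```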
> Still, is there really a use case where a `self.requires(..., run=True)` propagating buildenv variables (as opposed to runenv vars) makes sense? I would expect a `self.tool_requires(..., visible=True)` to be required for that.
Unfortunately, yes. Environment variables have been largely abused in the C++ ecosystem to propagate information from regular dependencies, both static and shared (`run=True`), so that their consumers can find and link them at build time. This is one of the reasons we allowed `buildenv_info` definitions in regular "host" dependencies.
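As a sketch of that legitimate case: a regular host-context library whose consumers locate it through an environment variable can expose it via `buildenv_info` in `package_info()` (the package and variable names here are made up for illustration):

```python
from conan import ConanFile

class HostLib(ConanFile):
    name = "hostlib"   # hypothetical library
    version = "1.0"
    package_type = "shared-library"

    def package_info(self):
        # Consumers' build scripts read MYLIB_ROOT to find and link
        # this library, so it must reach them via the build environment
        self.buildenv_info.define_path("MYLIB_ROOT", self.package_folder)
```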
> Also, if an application should not be used in a host context for libraries, it should maybe be made more explicit in the API. It seems kind of inevitable that someone else stumbles upon this footgun.
There are some use cases of recipes gathering multiple applications for deployment, generally with an "unknown" package type, but the unknown type has long been associated with libraries. It is not straightforward to diagnose this situation without generating false positives for those legitimate cases, but it is totally true that trying to detect it and report it somehow could be helpful and beneficial.
Environment details
Steps to reproduce
A minimal example: https://github.com/valgur/conan-buildenv-issue

Using a package with `package_type = "application"` as a host-context `self.requires()` dependency without explicitly adding `run=False` results in buildenv environment variables being added to all downstream packages as well.

This came up when trying to use `gfortran` from a `gcc` package together with its `gfortran` and `quadmath` runtime libraries to build `openblas`. The unexpected silent propagation of the `CC` and `CXX` vars from `gcc` caused some of the downstream package builds to fail.

I suppose this could be considered the intended behavior for `self.requires(..., run=True)`, but it's nevertheless unexpected and looks like an easy pitfall to me. There are two aspects to this issue: should `package_type = "application"` default to `run=True` when used with `self.requires()`? Neither of these looks like the correct behavior to me.
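Until that changes, a workaround consistent with the report above would be to opt out of run propagation explicitly. A minimal sketch:

```python
from conan import ConanFile

class Pkg(ConanFile):
    def requirements(self):
        # run=False opts out of propagating gcc's run/env information
        # to downstream packages
        self.requires("gcc/1.0", headers=False, libs=True, run=False)
```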
Logs
No response