tarik02 opened 5 years ago
Parameters can be passed to Buckaroo packages via the command-line at build-time (demo). These can then be read by the BUCK files of the package in order to configure it.
However, these flags are not part of the resolution process, so it is up to the user to determine the correct set of parameters. This can always be made to work, but it can prevent things from working out-of-the-box.
I think this proposal would be a good improvement.
An open question is how this should work in practice.
It would be good to discuss this before diving into an implementation.
Overview of Rust approach: https://doc.rust-lang.org/cargo/reference/manifest.html#the-features-section
So, there can be a problem with this dependency tree:
If we have parameters, this can become a real problem. This can be solved with something like peer dependencies in NodeJS.
So, in this case, package1 will have a peer dependency on package2, and the root project will have a dependency on package2 which will be used by package1. Then package1 will force the root project to also depend on package2. This approach needs some way to declare and merge the features of package2 coming from the root project and from package1. In particular, the merging strategy could be defined in package2 (for example: set, array, etc.).
I think that features should be an object (but, for simplicity, without nested objects). So there should be unit types (string, int), and the object would only contain unit values and arrays of units.
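To make the shape concrete, here is a small sketch of what such a feature object might look like, written as the Python/Skylark values it would map to (the feature names here are made up for illustration):

features = {
    "profile": "core",      # unit: string
    "samples": 4,           # unit: int
    "extensions": [         # array of units (strings only)
        "GL_EXT_framebuffer_multisample",
    ],
}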
For Rust-style dependencies that are enabled by features, we could use this syntax:
[[dependency]]
name = ...
condition = "feature1"
Conflicting feature sets can be checked by the BUCK config; it seems easier to implement this there than in Buckaroo.
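As a sketch of that idea, a package's BUCK file could validate the requested feature set itself and fail the build on an impossible combination. The config section and feature keys below are hypothetical, just to show the mechanism:

def check_glfw_features():
    # Example check: require at least one API to be enabled.
    gl = read_config("buckaroo", "glfw.api.gl", "")
    gles = read_config("buckaroo", "glfw.api.gles", "")
    if gl == "" and gles == "":
        fail("glfw: at least one of the 'gl' and 'gles' APIs must be enabled")

check_glfw_features()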
If we have parameters, this can become a real problem. This can be solved with something like peer dependencies in NodeJS.
In Buckaroo, packages are "peer" (we call them "public") dependencies by default. To have a "private" dependency (like in NPM by default), whose version does not need to be shared with others, you must add private = true.
So, in this case, package1 will have a peer dependency on package2, and the root project will have a dependency on package2 which will be used by package1. Then package1 will force the root project to also depend on package2. This approach needs some way to declare and merge the features of package2 coming from the root project and from package1. In particular, the merging strategy could be defined in package2 (for example: set, array, etc.).
Your example is supported by Buckaroo. The manifest for package1 should be something like:
[[dependency]]
package = "package2"
version = "*"
private = true
Conflicting feature sets can be checked by the BUCK config; it seems easier to implement this there than in Buckaroo.
This is true. It might be nice if a combination of packages that can never work due to the configurations is never picked by the resolver... but this might be expecting too much!
I think that features should be an object (but, for simplicity, without nested objects). So there should be unit types (string, int), and the object would only contain unit values and arrays of units.
Interesting. How would integers work with the "condition" syntax?
Forgot to add:
Buck builds lazily, so if you fetch all of the optional dependencies but do not enable the feature then those optional packages will not be built. It is slightly less efficient (since we are "over fetching") but this is not a significant cost compared to "over building".
So, we need some condition syntax, for example:
condition = "feature1" # for boolean features
condition = "feature >= 5" # for integers/floats/versions
condition = "feature == 'abc'" # for previous ones/strings
condition = "'something' in feature" # for sets/arrays
My subjective opinion is that this should be enough. Also, if a feature used in a condition does not exist, we can consider the whole condition to be false.
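As a sketch of those semantics (conditions are modelled here as structured values rather than parsed strings, purely to keep the example short):

def evaluate_condition(condition, features):
    # A feature that the condition refers to but that is not declared
    # makes the whole condition false.
    name = condition["feature"]
    if name not in features:
        return False
    value = features[name]
    op = condition["op"]
    if op == "truthy":        # condition = "feature1"
        return bool(value)
    if op == ">=":            # condition = "feature >= 5"
        return value >= condition["rhs"]
    if op == "==":            # condition = "feature == 'abc'"
        return value == condition["rhs"]
    if op == "in":            # condition = "'something' in feature"
        return condition["rhs"] in value
    return False

features = {"samples": 4, "extensions": ["GL_EXT_framebuffer_multisample"]}
evaluate_condition({"feature": "samples", "op": ">=", "rhs": 5}, features)    # False
evaluate_condition({"feature": "missing", "op": "truthy"}, features)          # False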
Also, there should be some way to pass features to BUCK configurations and to use them in a convenient way.
Hmm, how are they peer by default? So can I access some functions of package2 from package1 if a project depends on package1 and package2?
In theory, we can avoid "overfetching": first download the BUCK file of the dependency and then execute it up to some point (for example, we would force it to check features before adding any libraries or binaries).
My subjective opinion is that this should be enough. Also, if a feature used in a condition does not exist, we can consider the whole condition to be false.
I like this! It would map cleanly to the Skylark (Python) types in Buck.
Also, there should be some way to pass features to BUCK configurations and to use them in a convenient way.
This is done using the read_config function in Buck: https://buckbuild.com/skylark/generated/read_config.html
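For example, a BUCK file could pick up a feature value that Buckaroo has written into the Buck config roughly like this (the config section, key name and compiler flag are hypothetical, just to show the mechanism):

profile = read_config("buckaroo", "glfw.profile", "core")

cxx_library(
    name = "glfw",
    srcs = glob(["src/**/*.c"]),
    # Turn the feature value into a compiler definition (illustrative only).
    preprocessor_flags = ["-DGLFW_PROFILE_" + profile.upper()],
    visibility = ["PUBLIC"],
)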
Hmm, how are they peer by default? So can I access some functions of package2 from package1 if a project depends on package1 and package2?
They are "public" in the sense that everyone gets the same version, not in the sense of C++, Java, etc. You can only access the targets of a dependency if you explicitly depend upon it.
In theory, we can avoid "overfetching": first download the BUCK file of the dependency and then execute it up to some point (for example, we would force it to check features before adding any libraries or binaries).
I think it would be hard to make this work, and it might not be worth it. Remember that Buckaroo packages are source-code, so they are quite small.
A simpler approach would be to move the configuration logic into Buckaroo and then the BUCK files access the values from a configuration file generated by Buckaroo. This maintains the one way data-flow we have now:
Manifests -> Buckaroo -> BUCK files + configurations -> Buck -> Clang, GCC etc.
I think it's better to leave only BUCK and download the whole package in any case (at least, it can be changed in the future).
Also, since we have buckaroo_macros, we can add functions for features there.
But that type system is not always convenient for configuration, as in this example:
api = { gl = "3.2", gles = "" },
Maybe we should allow using a flat object as a feature?
P.S. This maps to a dict in Python/Skylark.
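So the flat object above would surface on the BUCK side roughly as (a sketch of the mapping only):

api = {"gl": "3.2", "gles": ""}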
Due to TOML restrictions, the config will look like:
[[dependency]]
package = "github.com/Tarik02/glfw"
version = "branch=fix-windows"
[[dependency.feature]]
name = "profile"
value = "core"
[[dependency.feature]]
name = "api"
value = { gl = "3.2", gles = "" }
[[dependency.feature]]
name = "extensions"
value = [
"GL_EXT_framebuffer_multisample",
"GL_EXT_texture_filter_anisotropic"
]
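If Buckaroo flattened a list-valued feature like extensions into a single config entry (say comma-separated, which is only an assumption about the encoding), the BUCK file could recover it like this:

extensions = [
    e
    for e in read_config("buckaroo", "glfw.extensions", "").split(",")
    if e != ""
]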
Does the example you give above work when there are multiple dependencies with different features?
Ah OK, makes sense.
The next question is where the feature flags would be written to after install. We already write the dependency lists to .buckconfig.d/.buckconfig.buckaroo, so that is the obvious place.
Yes, I thought about that. I think it is a good place for the features. Also, buckaroo_macros.bzl would need to be modified to add functions for accessing the feature list. Features should be package-local (a package can only access its own features).
Also, I have a question: is there a way to know which package we are building from a function in .buckconfig.d/.buckconfig.buckaroo?
Also, buckaroo_macros.bzl would need to be modified
I think we should explore using just read_config before extending the macros. Once we add macros, it is very difficult to change them without breaking packages.
Also, I have a question: is there a way to know which package we are building from a function in .buckconfig.d/.buckconfig.buckaroo?
I don't think so. We could write a "self_name" configuration value or something into this file to enable this.
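If that were added, the generated file could carry a self_name entry that the package's BUCK file reads back; the section and key names here are hypothetical:

self_name = read_config("buckaroo", "self_name", "")

# The package could then look up only its own, package-local features.
profile = read_config("buckaroo", self_name + ".profile", "core")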
I think we should explore using just read_config before extending the macros. Once we add macros, it is very difficult to change them without breaking packages.
@njlr, if we just add a macro, nothing bad happens. Yes, at the start we can use just read_config, but in the future there should be some macro like get_feature_int (or without the type name), etc.
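For instance, such a macro could simply wrap read_config; a rough sketch (the config section and key scheme are assumptions):

def get_feature_int(package, feature, default = 0):
    # Read the raw string value written by Buckaroo and convert it.
    raw = read_config("buckaroo", package + "." + feature, None)
    if raw == None:
        return default
    return int(raw)

# Example usage inside a BUCK file:
samples = get_feature_int("glfw", "samples", 4)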
I don't think so. We could write a "self_name" configuration value or something into this file to enable this.
That was a wrong question, my bad.
if we just add a macro, nothing bad happens.
Well, we are committed to maintaining macros once they are offered. If they are provided and people use them, then changing them will break packages. I would call this a bad thing! 🙂
Well, we are committed to maintaining macros once they are offered. If they are provided and people use them, then changing them will break packages. I would call this a bad thing! 🙂
I have no words xD. At least, if features turn out to be useful, we should consider adding macros for them.
Description
Sometimes it is necessary to pass parameters to a dependency. Let's look at an example: suppose we need to use glad in our project. It generates OpenGL headers using a Python script. The script takes a few parameters, so there are very many variants of the package.
Possible Fix
Use something like parameters (or features, as in Rust's Cargo). For example:
Context
I need to use GLAD as a Buckaroo dependency.
Your Environment