Seldaek closed this issue 8 years ago.
This syntax could be useful, but I see an issue: the proposed syntax allows having only one set of alternatives in your requirements.
Why is that? You can have alternatives inside alternatives, and you can have multiple requirements inside one too. Anyway, it's not clear yet how we can achieve this, so the syntax is only there to express the idea at this point.
well, `alternatives` cannot be used twice as a key in the array. Or is it just an arbitrary key?
It could be arbitrary, but I also don't see why you'd need it twice?
Erm yeah ok, nevermind that comment. Anyway I'm discussing with @naderman to find a viable solution.
Basically this is just a way of specifying "provides" for packages outside of their definition. As what this comes down to, is ext-phpredis and predis/predis providing the same virtual package. So just need to generate a fake package identifier and add the provides to those packages dynamically.
Updated the issue description with the plan, so it's written down.
What if there are several alternatives, or some of the alternatives contain several packages? Something like: the feature "template rendering" can be provided by "twig/twig", or by "zf/zend_view" + "zf/zend_form", or by "selfmade/framework". Example:
```json
{
    "require": {
        "feature-templates": [
            {"twig/twig": "1.8.*"},
            {
                "zf/zend_view": "1.1",
                "zf/zend_form": "2.0"
            },
            {"selfmade/framework": "*"}
        ]
    }
}
```
That's actually a good point, the provide/require trick won't work for alternatives that contain more than one package as far as I can see. Unless we really hack things a lot and say that zf/zend_view:1.1 requires zf/zend_form:2.0 and vice versa, to make sure they come together or not at all, but that would most likely mess things up pretty quickly.
@Seldaek adding the provide in the other package would probably be a mess too, as you would have to hack into the loading of the package. But it could be implemented by defining a meta package (so without code) providing the virtual package and requiring the packages specified in the alternatives. This would simplify the implementation (no need to find all packages matching the constraint to add the provide to them) and would support multiple packages (the meta package can have several requirements).
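A sketch of such a generated meta-package, for an alternative containing two packages (all names here are hypothetical, not an actual Composer feature):

```json
{
    "name": "composer-internal/templater-alternative-1",
    "type": "metapackage",
    "provide": {
        "composer-internal/templater": "1.0.0"
    },
    "require": {
        "zf/zend_view": "1.1",
        "zf/zend_form": "2.0"
    }
}
```

The root package would then simply require `composer-internal/templater`, and each alternative (one generated meta-package per set) would satisfy it.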
Yup that's a good idea. Adding the provide is not too hard though (well, once some planned refactorings are done;).
@naderman alternatives and provides are different beasts.
What if I have an app that can run on oracle or mysql, because it has an adapter for both, but not on postgres or sqlite? No one in their right mind would provide a "virtual package" grouping together just mysql and oracle.
The other piece missing to the puzzle here is recursion. Basically every requirement is for a set of items, which can be joined by "and" or "or". And every item can itself be a set.
The logic to do the solving might get somewhat complex, I reckon, but it would be useful. I have real-life cases I can offer as examples if needed.
The "virtual package" could be built implicitly. For example:
```
requirements :=
    "require": { <<list of packageDef>> }

packageDef :=
    "vendor/feature": "versionConstraint" |
    "any-feature": { <<list of packageDef>> } |
    "all-feature": { <<list of packageDef>> }
```
This way the templates example would be rewritten as:
```json
{
    "require": {
        "any-templater": {
            "twig/twig": "1.8.*",
            "all-zend-templater": {
                "zf/zend_view": "1.1",
                "zf/zend_form": "2.0"
            },
            "selfmade/framework": "*"
        }
    }
}
```
and there would be two "virtual packages": "templater", and "zend-templater"
That's not quite it. A virtual package is a package that does not exist. You can do that if it's only one level. If you need more than one level you need an existing meta package with its own requirements. This can still be generated automatically, and I think that's the way to go.
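Concretely, for the nested templates example above, the "and" group could become a generated meta package (all names here are illustrative, not an actual Composer feature):

```json
{
    "name": "generated/all-zend-templater",
    "type": "metapackage",
    "provide": {
        "generated/any-templater": "1.0.0"
    },
    "require": {
        "zf/zend_view": "1.1",
        "zf/zend_form": "2.0"
    }
}
```

twig/twig and selfmade/framework would each get a `provide` for `generated/any-templater` injected dynamically, and the root requirement would become `"generated/any-templater": "1.0.0"`.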
After playing with xdebug all afternoon and rethinking this problem, and since composer creates a class map file for dependencies, I wonder if the structure could/should be a little more complex. The class map should differ per environment: for example I use behat, and behat isn't required in every environment in symfony, only for testing; yet as requirements work now, every class from behat is checked before the right class is found, and the same goes for the acme bundle. The class map is ordered alphabetically.

In the case I tested, with a few bundles, I have 2890 lines in the autoload_classmap.php file, and roughly the same number of entries in the array. After "profiling" my symfony env in dev mode, I counted 894 calls to the composer class loader. If environments aren't separated, each of these lookups takes a little more time to find the right bundle.

So I wonder if the require could be:
```json
{
    "require": {
        // shortcut for all envs
        "all": {
            "templates": [
                {"twig/twig": "1.8.*"},
                {
                    "zf/zend_view": "1.1",
                    "zf/zend_form": "2.0"
                }
            ],
            "symfony/monolog-bundle": "2.1.*"
        },
        // separation by 2 envs
        "test, dev": {
            "doctrine/doctrine-fixtures-bundle": "*"
        },
        // separation by env
        "dev": {
            "sensio/distribution-bundle": "2.1.*",
            "sensio/generator-bundle": "2.1.*"
        },
        // separation by env
        "test": {
            "behat/symfony2-extension": "*",
            "behat/mink-extension": "*",
            "behat/mink-browserkit-driver": "*",
            "behat/mink-selenium2-driver": "*"
        }
    }
}
```
Another solution could be to write the map in the same order as in the json file, pushing the test dependencies to the end of the array, though I'm not sure the array is iterated in that order by the php engine.
To compare with the bundle install command present in rails, the config is written like this:
```ruby
group :development, :test do
  gem 'ruby-debug19', :require => 'ruby-debug'
  #...
end

group :test do
  gem 'cucumber-rails'
  gem 'cucumber-rails-training-wheels'
  #...
end
```
Issue #965 gives another approach: creating a require-test. I see the utility of creating it, for example to launch the test suite on a production server without dev utilities (the generator-bundle for example), but it seems more useful to me to expand the require definition.

After running a test, I think symfony and all packages using composer could gain speed with a precise separation of the requirements. My tests showed an average gain of 1% per request on a map file including all tests. You can see the result here.

Another improvement could be to not order classes by name, and instead keep the most often called classes at the beginning of the file. We could imagine a number permitting placement of the library, but I think this would not be clear.
@nicodmf Separate envs have nothing to do with this discussion. The goal here is to allow a package to require "A or B", because it can work with both but needs at least one of them.
@stof, i know, but this issue is closely related to the redesign of the require key, and with that in mind, the env can/must be considered here to avoid side effects (for example a wrong definition of the default env for require, parameters of the parsing function, default values...), and honestly, i don't want to create too many issues.

@Seldaek to avoid a BC break with my last proposal, it would be possible to add a key (named libraries or requirements...) with the env parameters, and define the plain require as meaning "all envs", deprecated for one or two versions before removal.
@nicodmf I don't get what you are trying to achieve here. First of all your benchmark is way below statistical relevance. And second if you don't want dev packages to slow down your production server, put them in require-dev and install without --dev on your production machines. Then they won't be in the classmap.
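For reference, the standard way to keep dev-only packages out of the production classmap (package names taken from the example above):

```json
{
    "require": {
        "symfony/monolog-bundle": "2.1.*"
    },
    "require-dev": {
        "behat/symfony2-extension": "*",
        "behat/mink-extension": "*"
    }
}
```

Installing without `--dev` (or with `--no-dev` in later Composer versions) skips the `require-dev` section entirely, so those classes never appear in `autoload_classmap.php`.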
A variant of this could be extremely useful to reduce inter-package dependencies:
Declare compatibility with an abstraction:
```json
{
    "name": "monolog/monolog",
    "compatible": {
        "psr/log": "1.0.0"
    }
}
```
Depend on an abstraction:
```json
{
    "name": "somevendor/somepackage",
    "require": {
        "psr/log@compatible": "1.0.0"
    }
}
```
In this case psr/log is a package, but it need not be. It could be any standard dictating an interface. The issue I see though is you either need blind faith in the ecosystem or still be explicit at some point:
```json
{
    "name": "myvendor/mypackage",
    "require": {
        "somevendor/somepackage": "dev-someversion",
        "monolog/monolog": "1.6.*@dev"
    }
}
```
Still pondering whether that's actually an issue with real-world significance as resolutions are "locked" before testing and deployment anyway. So probably not. Should the end-client end up with an undesired resolution there's nothing stopping one from being explicit (although publishing for reuse would obviously be discouraged). The benefits to the ecosystem could be significant I believe.
Maybe this should be a separate feature request?
I've considered doing something like this https://gist.github.com/slbmeh/5809882 before.
What about creating an `alternative` package type? They could be handled like meta-packages, but instead of requiring all packages, they would only require one. Maybe have the schema for an alternative package type generate something like my package repository structure.
@448191 the problem with packages declaring compatibility with abstractions is that it forces 3rd party repos to agree on abstractions. My use case is: my app supports oracle, mysql or postgres, and one of the 3 is needed. I surely cannot invent a "my-app-db-connector" interface and force the 3 php extensions mentioned to declare that they adhere to it... (note: same comment as I posted "more than 1 year ago"!)
@gggeek: true. But you might publish both your adapter and abstraction as packages and depend on the abstraction. Anyone using your adapter would propagate distribution of the abstraction (provided they correctly depend on the abstraction, something Composer might enforce), alternative implementations might just spring up. To stick with your example: you might not feel like publishing an SQLite adapter. But someone else might. You get SQLite support for your efforts, they get the 3 other adapters. Everybody wins.
I don't think this is a new concept.
@slbmeh: the difference with providing alternatives is that you'd have to be explicit. The whole point is to have the dependency resolution open ended.
@448191 If you want to put each adapter in its own package with a simple requirement for each of them, this is already supported by Composer. It is what `provide` is about.
Question: Has this discussion stalled? :)
@shehi appears that way, I've been using the provide method for nearly a year now and it seems to be sufficient for the minimal amount of use cases I've encountered.
Any update on this?
Looking for a way to specify that either `ext-gd`, `ext-imagick`, or `ext-gmagick` is installed (yes, in our case we only need this for system requirements).
@andrerom AFAIK you can solve that by taking your variably dependent logic and segregating it into a small package that provides a virtual package which your larger project depends on.
```json
{
    "name": "my/gd-impl",
    "require": {
        "ext-gd": "*"
    },
    "provide": {
        "my/image-processor": "1.0.0"
    }
}
```

```json
{
    "name": "my/imagick-impl",
    "require": {
        "ext-imagick": "*"
    },
    "provide": {
        "my/image-processor": "1.0.0"
    }
}
```

```json
{
    "name": "my/gmagick-impl",
    "require": {
        "ext-gmagick": "*"
    },
    "provide": {
        "my/image-processor": "1.0.0"
    }
}
```

```json
{
    "name": "my/awesome-project",
    "require": {
        "my/image-processor": "^1.0.0"
    }
}
```
That would work, but then I basically need to set up 3 repos with packagist, just for one composer.json file in each of them. Unless I manage to convince the Imagine* maintainers to do this where it makes sense, in their setup, where the actual drivers for these backends exist.

* And the same can be said about Doctrine, Stash and others that have requirements on php modules/extensions.
@andrerom I usually approach it by abstracting the driver that works with different extensions into a separate lib... then i don't need to care which one they picked and just expect my interface to be implemented...
Sure, but I'm not the maintainer of those libs, so that would be completely up to them if they would appreciate splitting out their code.
@andrerom that isn't what I meant... I break out the piece of my code that works with the extension or dependency. Example being I have an API client that can work with guzzle 3 or guzzle 5... i took the variable code and created an interface... then i wrote two micro libs responsible for handling the pieces directly with guzzle and injected them into my project with an autoload file config.
IMO a cleaner approach. Rather than having all of the code for imagick, gmagick, and gd in your project when it may only be using one at a time you put that with the composer.json file that depends on the extension you are interacting with.
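A sketch of what one such micro lib's composer.json might look like (all package names here are hypothetical):

```json
{
    "name": "acme/api-client-guzzle5",
    "description": "Guzzle 5 transport for acme/api-client",
    "require": {
        "guzzlehttp/guzzle": "~5.0",
        "acme/api-client-contracts": "~1.0"
    },
    "autoload": {
        "psr-4": {"Acme\\ApiClient\\Guzzle5\\": "src/"}
    }
}
```

The main project then depends only on the contracts package and injects whichever transport implementation happens to be installed.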
As written in #2940 I think this is unlikely to happen so I'm closing this now. There are ways to do this as stated above, and the rest of the most complicated checks can be left to runtime checks IMO, if we try to build all this into composer it might end up just adding more complexity for little apparent benefit.
Example: The package would then have a require on `redis-some-hash-of-the-array: 1.0.0` or the like. All packages matching the alternative requirements would have a provide added with `redis-some-hash-of-the-array: 1.0.0`. That should solve it pretty easily.
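As a sketch, with an illustrative hash in place of the real generated identifier:

```json
{
    "name": "some/app",
    "require": {
        "redis-5f3a9c": "1.0.0"
    }
}
```

Both ext-phpredis and predis/predis would dynamically receive `"provide": {"redis-5f3a9c": "1.0.0"}`, so installing either one satisfies the requirement.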