Closed: jpstacey closed this issue 10 years ago
The issue is that views includes several example modules, and so we use the `replace` link for those. We do this for any module found inside the project, except that we explicitly exclude any path with 'test' in it, because those are obviously not meant to be exported for consumption, and some of them (in core) deliberately test the effects of non-viable .info files.
If we 'replace' something, then we must satisfy all of its dependencies.
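For context, a `replace` entry in composer.json tells Composer that installing the parent package also provides the listed sub-packages, which is why their `require` links must then be satisfiable too. A rough sketch of the shape involved (the package names and constraints here are illustrative, not the service's actual output):

```json
{
    "name": "drupal/views",
    "replace": {
        "drupal/views_ui": "self.version"
    },
    "require": {
        "drupal/ctools": "*"
    }
}
```

Because `drupal/views_ui` is replaced rather than installed separately, anything it would have required effectively becomes a requirement of `drupal/views` itself.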
I am open to a specification for things to exclude. One could argue that the example modules should have their dependencies satisfied if for no other reason than that the person who actually wants to try the example module would otherwise have no good way of installing the dependencies with composer.
If we have a few more tools to aid in a workflow like that of drupal/tangler, then it's reasonable to say that someone can just pluck the example modules out into a separate drupal project, install everything, and try out the example. Until we have that, or there is actually a problem with satisfying conflicting dependencies, I'm inclined to just download more files because it's easy.
Well, I think there's a deeper issue at work here. Composer, in the context of Drupal, is effectively a tool for downloading Drupal projects. I think we can agree that the composer packages provided by the service map 1-to-1 onto Drupal projects!
Drupal provides two mechanisms for keeping track of dependencies: .info files, which declare module-to-module dependencies, and makefiles, which declare project-to-project dependencies.
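Roughly, the two mechanisms look like this (module and project names are illustrative, using D7-era .info and drush make syntax):

```ini
; foo.info — module-to-module dependencies, consumed when enabling a module
name = Foo
core = 7.x
dependencies[] = views

; foo.make — project-to-project dependencies, consumed when building a site
core = 7.x
api = 2
projects[views][version] = 3.7
```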
So I think this tool should be parsing makefiles (if they exist) and using them to determine project-to-project dependencies, and leaving .info files to the enabling phase. If a makefile doesn't exist, then that's arguably the responsibility of the project maintainer, and I don't think guessing project dependencies from all possible module dependencies is desirable behaviour: the best thing to do is to require the user to explicitly state all required projects, until a robust project dependency model is provided by drupal.org.
(I completely understand, btw, the desire to build a project-to-project dependency model. It'd be great to have a tool that did that. But I think it needs to come from a centralized resource and you shouldn't feel it's the responsibility of this service: ideally dependencies don't behave differently depending on how one sources the projects.)
If you use `drush en` or `drush dl` to fulfill your module dependencies, then you will not be able to manage the version constraints of those dependencies for your project.
This is why all of my projects source all of their contrib code by running `composer install` in CI, production, staging, and local dev environments. This allows us to run `composer update` to safely explore updates to contrib in local dev environments, commit the change, and verify that it passes CI before merging into mainline for everyone else to work against.
I don't see how the proposed use of `drush en` or `drush dl` is better than this, but I would happily entertain an explanation.
So, clearly, I am going to keep parsing both .make and .info files to determine the `require` links of a package. Clearly all of those things will need to be on disk for the application to work as expected at runtime. I am still open to a better specification for which .info and .make files to omit, though I also still believe that downloading more files (which end up being cached and reused between multiple projects on the same machine, and which are not invoked at runtime) is the easiest way to satisfy everyone.
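To make the parsing step concrete, here is a minimal sketch of extracting module dependencies from a D7-style .info file. The function name and regex are ours, not the service's actual implementation, and it assumes the common `dependencies[] = modulename` syntax, stripping any version constraint like `ctools (>=7.x-1.0)`:

```python
import re

def parse_info_dependencies(info_text):
    """Extract module names from dependencies[] lines in a Drupal .info file.

    Assumes D7-style syntax; trailing version constraints such as
    `(>=7.x-1.0)` are dropped, since only the module name is needed
    to map a dependency onto a package.
    """
    deps = []
    for line in info_text.splitlines():
        # Match lines like: dependencies[] = views  or  dependencies[] = "views"
        match = re.match(r'\s*dependencies\[\]\s*=\s*"?([a-z0-9_]+)', line)
        if match:
            deps.append(match.group(1))
    return deps

example = """
name = Views UI
core = 7.x
dependencies[] = views
dependencies[] = ctools (>=7.x-1.0)
"""
print(parse_info_dependencies(example))  # ['views', 'ctools']
```

Mapping the extracted module names back onto their parent projects is the hard part, which is exactly why the service errs on the side of treating every on-disk .info file as a source of `require` links.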
To be clear, the purpose of the code in this repository is to offer convincing proof that all drupal projects on drupal.org should actually be on github with a composer.json file and there should be a packagist-like application that indexes them. The purpose of this code is to become irrelevant. In the meantime, I also get to use it at work to make life better while we wait for things to change.
Ah, I think I misunderstood the purpose of this service!
Steps to repeat

Run `composer update` on the following:

What should happen

What actually happens