aspiers opened this issue 10 years ago
Well, the way el-get is designed, dependencies have to be installed first, so when el-get is deciding what the dependencies of a package are, it doesn't yet have access to the package's source. I don't think this issue is fixable without a rewrite; on a related note, I will try to address this in my rewrite.
Ah of course, it's chicken and egg :) I thought there was probably something I was missing.
However, this is identical to the problem that any other package manager (`zypper`, `yum`, `apt` etc.) faces, since these tools all need access to the dependency metadata of not-yet-installed packages - and they all solve it with a tool which automatically extracts the dependencies from the packages into separate files containing repository metadata.
So why do you say a rewrite would be required? I would have thought it should be reasonably easy to extend the existing code to provide a new `el-get-update-recipe-dependencies-from-source` which downloads either a specified package or all of them, and for each package extracts the dependencies from the header and updates the recipe accordingly. Then the person invoking it could git commit, push, and send a pull request.

Of course this would take a lot of time and bandwidth when run on all packages, but even doing it once a week should vastly improve the reliability of the recipes' dependency metadata. And allowing it to run on a subset of packages enables a divide-and-conquer approach - it would be very easy for people to co-maintain the dependencies for their favourite list of packages in the peer-to-peer fashion which el-get so nicely implements already. We could even have client-side hooks which spot when the dependency headers are changed, and automatically update the developer's local copy of the corresponding `.rcp` file. So all they would have to do is remember to send a pull request to `el-get` after changing the dependencies.
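For what it's worth, extracting the header is the easy part; here is a rough sketch using only the standard `lisp-mnt` library. The function name and file path are purely illustrative and not part of el-get.

```elisp
;; Illustrative sketch only -- `my/package-requires-from-file' is a made-up
;; name, not an existing el-get function.  It reads the Package-Requires
;; header from a package's main .el file and returns the dependency names,
;; which a tool like the proposed
;; el-get-update-recipe-dependencies-from-source could write into :depends.
(require 'lisp-mnt)
(require 'cl-lib)

(defun my/package-requires-from-file (file)
  "Return the package symbols named in FILE's Package-Requires header.
The `emacs' pseudo-package is excluded; multi-line headers are ignored
for brevity."
  (lm-with-file file
    (let ((header (lm-header "package-requires")))
      (when header
        (cl-remove 'emacs
                   (mapcar (lambda (dep) (if (consp dep) (car dep) dep))
                           (car (read-from-string header))))))))

;; Example (path is illustrative):
;; (my/package-requires-from-file "~/.emacs.d/el-get/pkg-info/pkg-info.el")
;; => (dash epl)
```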
I've been looking at this. When installing a package we currently do: `download+unpack+build+init` the dependencies, then `download+unpack+build+init` the package. To get automatic dependency detection we would need to do: `download+unpack` the package, check `Package-Requires`, `download+unpack+build+init` the dependencies, then `build+init` the package.
So the basic problem is that `download+unpack+build+init` is all one step. A package can be "Required" or "Installed", but there is no "Downloaded" state.
> I would have thought it should be reasonably easy to extend the existing code to provide a new el-get-update-recipe-dependencies-from-source which downloads either a specified package or all of them, and for each package extracts the dependencies from the header and updates the recipe accordingly.
So there isn't a way to just "download ... a specified package" without installing, and we can't install without knowing the correct dependencies.
I have just created #2212 which adds `el-get-auto-update-dependencies`. This won't install anything, but it can be used to update the dependencies of already installed packages. A more complete solution will have to wait until we can break installation down into smaller steps.
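To make the "smaller steps" idea concrete, a split flow might look something like the sketch below. Every `my/...` name is hypothetical and does not exist in el-get (only `el-get-install` is a real command), and the header-reading helper is the one sketched earlier in this thread.

```elisp
;; Hypothetical sketch of installation broken into separate steps, so that
;; Package-Requires can be consulted between download and build.  None of
;; the my/... functions exist in el-get; el-get-install does.
(defun my/el-get-install-with-auto-deps (package)
  "Install PACKAGE, deriving its dependencies from Package-Requires."
  ;; 1. download+unpack only -- no build, no init
  (my/el-get-download package)
  ;; 2. the source is now on disk, so the header can be read
  (dolist (dep (my/package-requires-from-file
                (my/el-get-main-file package)))
    ;; 3. dependencies get the full download+unpack+build+init treatment
    (el-get-install dep))
  ;; 4. finally build+init the package itself
  (my/el-get-build-and-init package))
```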
@npostavs Thanks a lot for looking into this and reporting back! What you wrote makes perfect sense. Hopefully we can break it into smaller steps in the not too distant future.
There's an additional problem that `Package-Requires` may refer to different package names than those that el-get knows about. For example, magit is split into `magit`, `magit-popup`, and `with-editor`. So we'll have to maintain a mapping (possibly we can derive it from MELPA recipes).
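A simple starting point might be a hand-maintained alist, as in the hedged sketch below; the variable and function names are illustrative, and the magit entries merely assume a single el-get recipe provides all three libraries.

```elisp
;; Illustrative sketch: map names appearing in Package-Requires headers to
;; the el-get recipe that provides them.  The names and entries here are
;; assumptions for illustration, not part of el-get.
(defvar my/el-get-dependency-aliases
  '((magit-popup . magit)    ; assuming one magit recipe builds these
    (with-editor . magit))
  "Alist mapping a Package-Requires name to the el-get recipe providing it.")

(defun my/el-get-resolve-dependency (name)
  "Return the el-get recipe name that satisfies dependency NAME."
  (or (cdr (assq name my/el-get-dependency-aliases)) name))

;; (my/el-get-resolve-dependency 'with-editor) ; => magit
;; (my/el-get-resolve-dependency 'dash)        ; => dash
```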
Maybe I'm missing something, but it looks to me like `el-get` does not respect the contents of the `Package-Requires` header within the package's main `.el` file. For example, I just did `M-x el-get-install pkg-info` and it failed to load `epl`, despite `pkg-info.el` listing `epl` (and `dash`) in its `Package-Requires` header.

If I do `M-x el-get-find-recipe-file pkg-info` and look at the resulting recipe, and then check what `M-x el-get-describe pkg-info` reports, it is clear that `el-get` thinks it depends on `s`, when in actual fact it depends on `dash` and `epl`.

Again, I may be misunderstanding something here, but if I'm right, this is a fundamentally broken approach, because upstream package dependencies will naturally change on a regular basis, so any model which violates DRY and expects the dependencies to be updated in more than one place will inevitably lead to frequent breakage. Sure, there may be occasions where it is necessary to model dependencies in the recipe in addition to those in the package header (e.g. if upstream fails to correctly model the dependencies), but this is not a good reason to fail to honour the package header.
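To illustrate the duplication being objected to, the same dependency information ends up maintained in two unrelated places. The header and recipe contents below are illustrative approximations, not the real pkg-info files.

```elisp
;; Upstream, in pkg-info.el (version numbers are illustrative):
;;   ;; Package-Requires: ((epl "0.8") (dash "2.10.0"))

;; Separately, in el-get's pkg-info.rcp (contents illustrative):
(:name pkg-info
 :description "Provide information about Emacs packages"
 :type github
 :pkgname "lunaryorn/pkg-info.el"
 :depends (epl dash))
```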