tebello-thejane opened this issue 9 years ago
> So, this is not an issue with stack, since it builds just fine when cabal is given a nudge.

Wrong. There is an issue in stack (and its dependencies): mono-traversable-0.4.0 has an upper bound of "<5" on base, yet it does not compile with GHC >= 7.8.
Nonetheless I agree it looks strange that cabal chooses that version of mono-traversable. The behaviour can be observed in the install plan for chunked-data-0.2.0. The diff of the plans without/with a manual constraint on mono-traversable is:
```
$ diff -y --suppress-common-lines <(cabal install --dry-run chunked-data) <(cabal install --dry-run chunked-data --constraint="mono-traversable>0.4.0")
> dlist-0.7.1.1
vector-0.11.0.0 | dlist-instances-0.1
> vector-0.10.12.3 (latest: 0.11.0.0)
> mwc-random-0.13.3.2
> vector-algorithms-0.7
mono-traversable-0.4.0 (latest: 0.9.2.1) | mono-traversable-0.9.2.1
```
It appears that cabal has to choose between non-latest vector and non-latest mono-traversable. And indeed:
```
$ cabal install --dry-run chunked-data-0.2.0 --constraint="mono-traversable>0.4.0" --constraint="vector==0.11.0.0"
Resolving dependencies...
cabal: Could not resolve dependencies:
trying: chunked-data-0.2.0 (user goal)
trying: vector-0.11.0.0 (dependency of chunked-data-0.2.0)
trying: mono-traversable-0.9.2.1 (dependency of chunked-data-0.2.0)
next goal: vector-algorithms (dependency of mono-traversable-0.9.2.1)
rejecting: vector-algorithms-0.7, 0.6.0.4, 0.6.0.3, 0.6.0.2, 0.6.0.1
(conflict: vector==0.11.0.0, vector-algorithms => vector>=0.6 && <0.11)
rejecting: vector-algorithms-0.5.4.2, 0.5.4.1, 0.5.4, 0.5.3.1, 0.5.3, 0.5.2,
0.5.1, 0.5.0, 0.4, 0.3.4, 0.3.3, 0.3.2, 0.3.1, 0.3 (conflict: mono-traversable
=> vector-algorithms>=0.6)
Dependency tree exhaustively searched.
```
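The bound arithmetic behind that rejection can be checked mechanically. This Python sketch (version numbers and ranges taken from the solver output above; the helper names are invented for illustration) shows that once vector is pinned to 0.11.0.0 and mono-traversable-0.9.2.1 demands vector-algorithms >= 0.6, no released vector-algorithms version remains viable:

```python
def parse(v):
    """Turn a dotted version string into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))

# Representative vector-algorithms versions and the vector range each
# accepts, per the solver output: every release wants vector < 0.11.
vector_algorithms = {
    "0.7":     (parse("0.6"), parse("0.11")),  # vector >=0.6 && <0.11
    "0.6.0.4": (parse("0.6"), parse("0.11")),
    "0.5.4.2": (parse("0.6"), parse("0.11")),  # rejected anyway: < 0.6
}

def accepts(bounds, v):
    """Half-open interval test: lo <= v < hi."""
    lo, hi = bounds
    return lo <= v < hi

pinned_vector = parse("0.11.0.0")

# mono-traversable-0.9.2.1 needs vector-algorithms >= 0.6 ...
candidates = [va for va in vector_algorithms if parse(va) >= parse("0.6")]
# ... but none of those candidates tolerates vector-0.11.0.0.
viable = [va for va in candidates
          if accepts(vector_algorithms[va], pinned_vector)]
print(viable)  # [] -- the "exhaustively searched" outcome above
```

The empty result mirrors the two rejection lines in the solver trace: the >= 0.6 constraint prunes the old releases, and the vector pin prunes the rest.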
It seems to me that cabal works correctly. Otherwise I would expect some reasoning about what is supposedly done wrong.
Also: blaming cabal while simultaneously making such bare-bones problem reports does not seem fair to a free, open-source project. Just a rant, please take it as such. It is a response to the community's attitude towards cabal in general, not to this particular report.
@snoyberg maybe the best practical solution would be an x-revision restricting the bound on base to exclude GHC 7.10 for mono-traversable<=0.4.0? (Not that I am a big fan of x-revisions.)
Ah, the vector-algorithms restrictive upper bound is indeed the problem, thanks for figuring that out:
```
$ cabal install --dry-run --package-db=clear --package-db=global chunked-data --constraint 'vector >= 0.11' --constraint 'mono-traversable == 0.9.2.1'
Resolving dependencies...
cabal: Could not resolve dependencies:
trying: chunked-data-0.2.0 (user goal)
trying: vector-0.11.0.0 (dependency of chunked-data-0.2.0)
trying: mono-traversable-0.9.2.1 (dependency of chunked-data-0.2.0)
next goal: vector-algorithms (dependency of mono-traversable-0.9.2.1)
rejecting: vector-algorithms-0.7, 0.6.0.4, 0.6.0.3, 0.6.0.2, 0.6.0.1
(conflict: vector==0.11.0.0, vector-algorithms => vector>=0.6 && <0.11)
rejecting: vector-algorithms-0.5.4.2, 0.5.4.1, 0.5.4, 0.5.3.1, 0.5.3, 0.5.2,
0.5.1, 0.5.0, 0.4, 0.3.4, 0.3.3, 0.3.2, 0.3.1, 0.3 (conflict: mono-traversable
=> vector-algorithms>=0.6)
Dependency tree exhaustively searched.
```
Revisions published. I can't really test this though, since the problem is non-deterministic (or at least appears to be, since my cabal install commands initially worked).
When I now do cabal install --dry-run chunked-data, it chooses mono-traversable-0.3.1 unless I have vector-0.10.12.3 installed in the sandbox, even with --package-db=clear --package-db=global. This kind of resolver cleverness is horrible :/
How about now? I added some preferred version info and a lower bound in chunked-data.
@tebello-thejane I agree with your criticism that what I seem to expect of end-users is asking too much. But then you should admit your limited knowledge and ask where the problem might lie, not make the bold claim "So, this is not an issue with stack [..]". There is a big difference in tone between "there is an error in this tool" and "I wish this command succeeded; please help". Also, I suggest you mention how much time you invested in trying to diagnose the issue.
I am aware that it was @snoyberg who first claimed a fault in cabal :)
@snoyberg seems all fine now :)
(stack seemed to resolve to the right plan even before the latest fix; I should have mentioned that. My point was about the sandbox being a potential cause of the non-determinism.)
Looks like it wasn't a solver bug after all, so closing.
@lspitzner Thanks for diagnosing the cause of the error.
I'm not going to go much deeper on this... but equating "technically speaking this is doing something that can be justified even though no one would ever want it to behave this way" with "not a bug" has been a large source of frustration with cabal. The fact that the dependency solver chose an ancient version of mono-traversable instead of a recent version of vector, when release dates would provide clear indication that the other way around would be more likely to succeed, is the kind of technically-correct-yet-practically-wrong thing that causes users grief constantly.
@snoyberg You have a point. Not sure, though, if release-date info is currently available to cabal-install.
@kosmikus, do you think something can be done in the solver to prevent such errors in the future?
@23Skidoo I've investigated that before, and yes, release dates are available in the form of timestamps on the package descriptions in the index tar.
@snoyberg

> equating [technically correct] with [not a bug] has been a large source of frustration with cabal

cabal is a framework/software. It does not equate. Who is the intended recipient of this message? I? The cabal maintainers? If the former, I quote from my initial reply:

> It seems to me that cabal works correctly. Otherwise I would expect some reasoning about what is supposedly done wrong.

So I never said that this definitely is not a bug, or that the implementation could not be improved to handle such a situation better. I left room for more concrete input. The criticism seems inappropriate. And regardless of interpretation, such vague criticism is not constructive.
@lspitzner Please look at the context of my comment; it was clearly not pointed at you, nor was it vague criticism, but an actual suggestion for a change in approach to which issues the cabal team treats as bugs and which it doesn't. In fact, the rest of my comment was very concrete about how I think cabal could do better here. I'm actually not sure how someone who read my entire comment could possibly interpret it as vague criticism.
@ttuegel IIRC the release dates are not reliably available in the 00-index tarball, as they get shadowed by cabal edits
PS: I think I've stumbled over a couple of other pitfalls you'd run into by relying on timestamps for the solver, but I need to think it through a bit more... I've been toying with the idea of using timestamps for guiding supervised .cabal edits, as that's the one place where I can see that information being useful as a first-order approximation of the build-depends-truth.
@tebello-thejane For future reference, this sounds a lot like the kind of issue the Hackage Trustee issue tracker over at https://github.com/haskell-infra/hackage-trustees/issues is for, as this isn't cabal's fault (even though I see the point that the cabal UI could be improved to make it clearer what the problem is) but rather the result of package authors omitting proper version bounds.
@23Skidoo You could do me and others who occasionally work on the solver a huge favour if you'd turn the enhancement request to use package timestamps for the solver into a new issue. It's difficult to figure out what's actually the reason why this is still open from a bug with a lot of discussion, where the important info is buried deep inside a few comments. The new bug can of course reference this one.
I was not aware of the Trustee issue tracker, @hvr. Thanks for the link.
> @ttuegel IIRC the release dates are not reliably available in the 00-index tarball, as they get shadowed by cabal edits
@hvr You're quite right! I forgot momentarily that tar only saves last-modification time and not creation time. I think there has been talk of using the package index as append-only; at least this would let us back out the creation time, albeit at the cost of traversing the whole archive. The truth is, tar is being pushed pretty far as a database format here.
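The recovery trick hinted at here can be sketched in a few lines: in an append-only index, the first tar entry for a given member name carries the original upload mtime, and later entries appended by .cabal revisions merely shadow it. A hypothetical Python sketch (the function name and example path are invented, not any cabal-install internals):

```python
import tarfile

def first_seen_mtimes(index_path):
    """Map each member name in an append-only index tarball to the
    mtime of its FIRST occurrence -- the original upload time --
    ignoring later entries appended by metadata revisions."""
    first = {}
    with tarfile.open(index_path) as tar:
        for member in tar:  # entries stream in archive order
            first.setdefault(member.name, member.mtime)
    return first

# Hypothetical usage, assuming a full-history 00-index.tar:
#   first_seen_mtimes("00-index.tar")["foo/1.0/foo.cabal"]
```

This only works if the history is never truncated, which is exactly the caveat raised below.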
> PS: I think I've stumbled over a couple of other pitfalls you'd run into by relying on timestamps for the solver, but I need to think it through a bit more... I've been toying with the idea of using timestamps for guiding supervised .cabal edits, as that's the one place where I can see that information being useful as a first-order approximation of the build-depends-truth
I don't think the idea here is to rely on the timestamps in general, but only to use them to prioritize conflicting constraints; i.e. prefer constraints that rule out older packages to constraints that rule out newer ones.
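That tie-breaking rule could look something like the following sketch. Everything here is hypothetical for illustration (the function, the version set, and the epoch timestamps are invented; this is not cabal's solver API): given two conflicting constraints, prefer the one whose excluded versions are older.

```python
def newest_excluded(upload_time, all_versions, allowed):
    """Upload time of the newest version a constraint rules out."""
    excluded = [v for v in all_versions if v not in allowed]
    return max((upload_time[v] for v in excluded), default=0)

versions = ["0.4.0", "0.9.2.1"]
# Fake epoch timestamps standing in for Hackage upload dates.
uploads = {"0.4.0": 1_388_534_400, "0.9.2.1": 1_435_708_800}

# Constraint A keeps only the old version; constraint B only the new one.
constraint_a = {"0.4.0"}
constraint_b = {"0.9.2.1"}

# Prefer the constraint that rules out only OLDER uploads, i.e. the
# one with the smaller "newest excluded" timestamp.
preferred = min([constraint_a, constraint_b],
                key=lambda allowed: newest_excluded(uploads, versions, allowed))
print(preferred)  # {'0.9.2.1'} -- keep the newer release in play
```

Under this heuristic the mono-traversable case above would have resolved toward the recent mono-traversable rather than the recent vector's competitor, which is the behaviour @snoyberg argues users expect.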
I think there has been talk of using the package index as append-only; at least this would let us back out the creation time, albeit at the cost of traversing the whole archive.
It needs to be traversed anyway when creating the index cache; the upload date could be saved in the cache as well.
> It needs to be traversed anyway when creating the index cache; the upload date could be saved in the cache as well.

Yes, but if the archive is append-only, it becomes much larger, right? By a factor of the average number of package revisions.
Right, with a persistent package-index this should work again, assuming the history won't be truncated.
Btw, can we please create a new focused ticket like @kosmikus suggested, and more importantly specify exactly how the timestamps are to affect the solving process? That'd make it easier to reason about the semantics and test it on paper by trying to come up with counter-examples.
@ttuegel some numbers:
We currently have 8565 packages and 58397 package-versions; 4574 of those package-versions have a non-zero x-rev, and in total there have been 5543 cabal edits so far.
Right now, if we created an uncompressed index tarball with full history, it would not be "much larger" but just roughly 10% larger. Compressing with an algorithm with a large enough dictionary and/or window size should bring that relative 10% increase down a bit.
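That ~10% figure follows directly from the counts just given, assuming each cabal edit appends one extra .cabal entry of comparable size to the index:

```python
package_versions = 58397  # entries in the current snapshot index
cabal_edits = 5543        # extra entries a full-history index would carry

growth = cabal_edits / package_versions
print(f"{growth:.1%}")  # 9.5% -- roughly the "10% larger" quoted above
```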
Trying to install the latest stack-0.1.3.0 causes the following failure:

However, specifying a constraint manually via cabal install --constraint 'mono-traversable >= 0.9' stack succeeds. So, this is not an issue with stack, since it builds just fine when cabal is given a nudge.