Closed: ghost closed this issue 6 years ago
I thought this was over-engineered at first, but it could simplify things in the end, and provide a solution in some (contrived) cases.
Do you really want to generate files when initializing our findlib implementation? Or is it just to describe the semantics?
Yeah, I really do want to generate files. Technically they wouldn't be generated during the findlib initialization. After #370 is merged, we would simply generate META files for all private libraries using almost the same code as the current one, and we would just modify `findlib.ml` to use `Build`, etc.
This has pretty much been implemented in #516
This ticket proposes a way to change how we resolve library names such that, given a search path of the form `d_1:...:d_n`, libraries living in `d_k` can only see `d_k:...:d_n`. This would allow supporting things that are currently not possible. Let's consider the following scenario:
- `a`
- `b`: depends on `a`
- `c`: depends on `b`
When resolving library names, jbuilder will always look for internal libraries first. So when computing the transitive closure of `c`, this yields the following result: `int/a, ext/b, int/c`
where `int/x` means library `x` found in the workspace and `ext/x` means library `x` found in the installed world. Jbuilder will error out in such a case, because `ext/b` was built against `ext/a` and there is no guarantee that `int/a` and `ext/a` are compatible; allowing it would very likely result in hard-to-debug "inconsistent assumptions" errors.

However, resolving the transitive closure of `c` as follows would work: `ext/a, ext/b, int/c`.

This case happens for instance when using `[@@deriving_inline ...]` with Base: a user trying to lint Base would need ppx_core, which depends on Base, and it's likely that ppx_core will be part of the installed world but not present in the workspace.

What follows is a proposal to change the way jbuilder resolves names so that the transitive closure of `c` gives the second result. It is also a generalization to n levels rather than just internal/external, which would allow jbuilder to no longer distinguish internal and external libraries. In particular, we wouldn't need two different representations of library dependencies and two different library name resolvers. This would simplify the code.

## Name resolvers with n levels
We take the following API as input:
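Roughly speaking (the names and types below are illustrative assumptions, not jbuilder's actual interface), a database can be modeled as a lookup from library names to entries carrying their direct dependencies:

```ocaml
(* Hypothetical sketch of the input API: one database maps library
   names to entries that list their direct dependencies. *)
type entry =
  { name : string         (* library name *)
  ; deps : string list    (* names of direct dependencies *)
  }

type db =
  { label : string                  (* e.g. the directory [d_k] *)
  ; find : string -> entry option   (* lookup in this database only *)
  }

(* Small helper to build a database from a list of entries. *)
let db_of_list label entries =
  { label
  ; find = (fun name -> List.find_opt (fun e -> e.name = name) entries)
  }
```

Each database only answers for the names it defines; the search-path logic described below is layered on top of a list of such databases.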
Now let's consider a list of n databases `db_1`, ..., `db_n`. Typically this would correspond to a findlib search path composed of n directories.

When resolving a name, we start by looking it up in `db_1`, then if not found in `db_2`, etc., until we find it in `db_k`. When resolving the dependencies of the resulting value, we do the same, except that we start from `db_k`, thus ignoring `db_1`, ..., `db_(k-1)`. We continue the process recursively until we reach values that have no dependencies.

In the resulting set of values, it is an error to have two values `x` and `y` such that `name x = name y && database x <> database y`. This would typically correspond to trying to link two incompatible versions of the same compilation unit, which is not allowed.

## Application to jbuilder
We rewrite the findlib implementation to use this mechanism, which would be implemented as a functor. We also get rid of `Lib_db`, which implements the lookup of internal libraries, as well as the `Lib` module, which is a compatibility layer on top of internal/external libraries. Instead, we generate META files for all private libraries in `<project-root>/.libs`, and inside `<project>` we use the following findlib search path: `<project-root>/.libs:_build/install/<context>/lib:<external-findlib-path>`.

This would simplify the code, and this mechanism could probably be reused for other languages once we have plugins.
It changes the interpretation we make of the findlib path, but I'm pretty sure this wouldn't be an issue in practice, since this is naturally how things are installed on the system. At worst, we could consider only three levels:

- `<project-root>/.libs`
- `_build/install/<context>/lib`
- `<external-findlib-path>`

which wouldn't break anything.
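The whole scheme can be illustrated with a small self-contained OCaml sketch (types and names are hypothetical, not jbuilder's implementation). Resolution of a root name starts at `db_1`; the dependencies of a value found in `db_k` are resolved starting from `db_k`; and the same name coming from two different databases is an error:

```ocaml
(* Hypothetical sketch of the proposed n-level name resolution. *)
type entry = { name : string; deps : string list }
type db = { label : string; entries : entry list }

let find db name = List.find_opt (fun e -> e.name = name) db.entries

(* Look [name] up in the suffix [dbs] of the search path; return the
   database label, the entry, and the suffix visible to its deps
   (i.e. the path starting from where the entry was found). *)
let rec lookup name dbs =
  match dbs with
  | [] -> None
  | db :: rest ->
    (match find db name with
     | Some e -> Some (db.label, e, db :: rest)
     | None -> lookup name rest)

(* Transitive closure of [roots]: the deps of a value found in [db_k]
   are resolved starting from [db_k], not from [db_1].  It is an error
   if the same name ends up coming from two different databases. *)
let closure roots dbs =
  let found = Hashtbl.create 16 in
  let rec go name dbs =
    match lookup name dbs with
    | None -> failwith ("library not found: " ^ name)
    | Some (label, e, visible) ->
      (match Hashtbl.find_opt found name with
       | Some prev when prev <> label ->
         failwith ("two incompatible copies of " ^ name)
       | Some _ -> ()  (* already resolved, consistently *)
       | None ->
         Hashtbl.replace found name label;
         List.iter (fun dep -> go dep visible) e.deps)
  in
  List.iter (fun r -> go r dbs) roots;
  Hashtbl.fold (fun name label acc -> (label ^ "/" ^ name) :: acc) found []

(* The scenario from above: [a] and [c] live in the workspace ("int"),
   [a] and [b] in the installed world ("ext"). *)
let () =
  let int_db = { label = "int"
               ; entries = [ { name = "a"; deps = [] }
                           ; { name = "c"; deps = ["b"] } ] } in
  let ext_db = { label = "ext"
               ; entries = [ { name = "a"; deps = [] }
                           ; { name = "b"; deps = ["a"] } ] } in
  List.iter print_endline (List.sort compare (closure ["c"] [int_db; ext_db]))
```

With this ordering, `b` is found in the installed world, so its dependency `a` is also taken from there, and the closure comes out as `ext/a`, `ext/b`, `int/c` instead of the conflicting `int/a`, `ext/b`, `int/c` that internal-first resolution produces.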