guicho271828 opened 7 years ago
As I commented on Fare's website: https://fare.livejournal.com/146698.html?thread=640522#t640522
I have always thought that we should not rely on hand-coded dependency management for compiling/loading files; the dependencies should be discovered automatically. (I am not discussing system-level dependencies at this point, only file-level ones, but the discussion below might extend to them.)
As you mentioned, CL has various processing phases and allows implicit dependencies caused by side effects. This means that static-analysis-based approaches are doomed.
Instead, I envision a systematic, trial-and-error approach that searches for a (sequential or parallel) plan for processing the files. A successful plan would be saved to a cache inside the directory (or written back into the ASDF definition by overwriting the file). The result would be easy to distribute via Quicklisp.
The search should start from the most common `:serial t` mode and refine the partial-order plan by merging some operations. Each trial should run in a separate process, since compiling and loading files alters the Lisp image. The search should be anytime, i.e., you can terminate it whenever you are satisfied and obtain the current best plan.
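To make the idea concrete, here is a minimal sketch of the anytime refinement loop described above, written in Python purely for illustration. Everything here is hypothetical: `try_build` stands in for launching a real trial build in a fresh subprocess (which is essential in practice, since compilation mutates the Lisp image), and the plan is represented as a set of pairwise ordering constraints, where the serial plan is the maximal set.

```python
def refine_plan(files, try_build, max_trials=100):
    """Anytime, trial-and-error search for a partial-order build plan.

    `files` is the serial compilation order; `try_build(constraints)` is a
    caller-supplied predicate that runs one trial build (in a separate
    process, in a real system) and reports whether it succeeded.
    """
    # Start from the most common case, :serial t — every file must precede
    # all files listed after it (a total order = maximal constraint set).
    constraints = {(a, b) for i, a in enumerate(files)
                   for b in files[i + 1:]}  # (a, b): a must precede b
    assert try_build(constraints), "even the serial plan fails"

    trials = 0
    # Refinement: try dropping one ordering constraint at a time; keep the
    # relaxation only if a trial build still succeeds. Fewer constraints
    # means more opportunities for parallelism.
    for edge in sorted(constraints):
        if trials >= max_trials:
            break  # anytime: stop whenever; the current plan is still valid
        trials += 1
        candidate = constraints - {edge}
        if try_build(candidate):
            constraints = candidate
    return constraints  # current best partial-order plan
```

A real implementation would also have to cope with flaky failures (the non-determinism mentioned below), e.g. by retrying a candidate before rejecting it, and would persist the returned constraint set as the cached plan.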
The files would need no declarations nor any additional manual tagging to specify dependencies. Given the implicit side effects, I believe this is the only feasible approach to achieving a parallel build system in CL. In fact, parallelism always implies non-determinism; we should accept it. There are also various methods for reducing the failure rate, and you can reduce the parallelism to gain more robustness.
Fare and rpgoldman need younger lispers.