madmann91 closed this issue 5 years ago.
The source of the problem is as follows: the function `count` is specialized by PE for both `pred` and the return continuation. Because of this, the tail call optimization cannot be applied, and the mangler will create calls to the old continuation inside the dropped continuation, which leads to divergence. To solve this, `count` should not be specialized for its return continuation. Will look into this.
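To illustrate the divergence, here is a toy model in Python (this is not Thorin's actual mangler API; names and structure are invented for illustration). Each recursive call of a CPS function wraps its return continuation in a fresh one, so a specialization cache keyed on the continuation never hits and the specializer clones forever, whereas keying on the predicate alone terminates:

```python
def specialize_with_ret(pred, ret, cache, limit=8):
    """Buggy behaviour: the cache key includes the return continuation."""
    key = (pred, ret)
    if key in cache:
        return cache[key]
    if len(cache) >= limit:  # stand-in for unbounded cloning
        raise RecursionError("specialization diverged")
    cache[key] = f"count_{len(cache)}"
    # The recursive tail call wraps the continuation, so the next key
    # is always fresh and the cache can never hit.
    return specialize_with_ret(pred, ("inc", ret), cache, limit)


def specialize_pred_only(pred, ret, cache):
    """Proposed behaviour: key only on the static argument `pred`."""
    key = pred
    if key in cache:  # the recursive call hits this and terminates
        return cache[key]
    cache[key] = f"count_{len(cache)}"
    return specialize_pred_only(pred, ("inc", ret), cache)
```

Running `specialize_pred_only("is_even", "halt", {})` yields one clone, while `specialize_with_ret` with the same arguments keeps producing new clones until the limit trips.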
The culprit is `FnType::is_returning`. It should return `true` if there is exactly one continuation of order 1; orders other than 1 should simply be ignored. Currently, it returns `false` whenever a higher-order function is present.
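The proposed rule can be sketched as follows (a hypothetical model, not Thorin's real code; `param_orders` stands in for the type orders of a function's parameters):

```python
def is_returning(param_orders):
    """Proposed rule: a function type is "returning" iff exactly one of
    its parameters is a continuation of order 1.

    param_orders: the order of each parameter type (0 for plain values,
    1 for first-order continuations, >1 for higher-order functions).
    Orders other than 1 are ignored, so a higher-order parameter no
    longer forces the result to False.
    """
    return sum(1 for order in param_orders if order == 1) == 1
```

Under this rule, a signature like `(value, return-continuation, higher-order fn)` still counts as returning, which the current implementation rejects.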
@leissa: OK with this change?
On top of this, we also need to change the inliner and run the `lower2cff` phase separately (otherwise PE will peel loops). See pull request #94.
Sorry for joining in so late, but I was on vacation. I guess your suggested change regarding `FnType::is_returning` works because of your closure conversion?
No, there is actually no need for closure conversion here. I just run the PE in `lower2cff` mode later on. The core issue is that PE and `lower2cff` are done together, meaning that the tail call optimization cannot work, even though there is no problem with that function after specializing for the predicate.
In other words, what you want to do in the example is specialize `count` for `pred`, but not for the return continuation (if you do that, you will obviously end up inlining indefinitely). Which is why I think it is best to first run PE as specified by the user, and then run an additional `lower2cff` phase that specializes problematic functions.
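To make this concrete, assuming `count` counts the elements satisfying `pred` (the thread does not show its body, so names here are hypothetical): after specializing for `pred` alone, the residual function is still a self tail call, which the tail call optimization can turn into a plain loop instead of triggering further mangling. A Python sketch of that residual:

```python
def count_is_even(xs):
    """Residual of `count` after specializing `pred` to "is even":
    the tail-recursive self-call has been lowered to a loop."""
    n = 0
    for x in xs:        # the former tail call, now iteration
        if x % 2 == 0:  # `pred` inlined as a concrete test
            n += 1
    return n
```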
Fixed after merging pull request #94.
The following code makes the partial evaluator diverge. Annotating the `pred` parameter with `@(true)` does not solve the problem.
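The original snippet is not preserved in this thread. For illustration only, the general shape of such a function — a `count` taking a predicate and a return continuation, written here in Python rather than Impala, with invented names — might look like this:

```python
def count(xs, pred, ret):
    """CPS-style count: tallies the elements of `xs` satisfying `pred`
    and passes the result to the return continuation `ret`."""
    def go(i, acc, k):
        if i == len(xs):
            return k(acc)          # done: invoke the continuation
        step = 1 if pred(xs[i]) else 0
        return go(i + 1, acc + step, k)  # tail call on the rest
    return go(0, 0, ret)
```

Specializing `go` for a concrete `pred` is harmless; specializing it for `k` as well is what drives the divergence discussed above.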