Yes, the logic of LEVEL=2 is slightly different from LEVEL>2, and I did not implement LEVEL=2 for convenience. Our algorithm is similar to Jacobi decoding when LEVEL=2 (N=2), but the logic still differs slightly: Jacobi decoding performs decoding and verification in one branch, while we put decoding and verification into two separate branches. You can try this repo https://github.com/teelinsan/parallel-decoding if you want the original Jacobi decoding.
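To make the distinction concrete, here is a toy sketch of the two structures, assuming a hypothetical greedy `next_token(prefix)` oracle. This is an illustration only, not LADE's actual implementation: the serial verification loop is written for readability, whereas LADE batches both the lookahead and verification branches into a single forward pass and maintains an n-gram pool, none of which is shown here.

```python
from typing import Callable, List, Tuple

Oracle = Callable[[List[int]], int]  # hypothetical greedy next-token oracle


def jacobi_step(prefix: List[int], drafts: List[int], next_token: Oracle) -> List[int]:
    # One Jacobi iteration: every draft position is recomputed in parallel
    # from the previous iterate.
    return [next_token(prefix + drafts[:i]) for i in range(len(drafts))]


def jacobi_decode(prefix: List[int], next_token: Oracle, n: int,
                  max_iters: int = 64) -> List[int]:
    # Jacobi decoding: decoding and verification live in the SAME update --
    # a draft is "verified" exactly when an iteration leaves it unchanged
    # (fixed point), so there is only one branch.
    drafts = [0] * n  # arbitrary initial guesses
    for _ in range(max_iters):
        refined = jacobi_step(prefix, drafts, next_token)
        if refined == drafts:  # fixed point reached: block matches greedy decoding
            break
        drafts = refined
    return prefix + drafts


def two_branch_step(prefix: List[int], drafts: List[int],
                    next_token: Oracle) -> Tuple[List[int], List[int]]:
    # Two-branch split (lookahead-style): one branch keeps refining draft
    # tokens, a separate branch verifies candidates and commits only the
    # accepted prefix.
    refined = jacobi_step(prefix, drafts, next_token)   # decoding branch
    accepted: List[int] = []                            # verification branch
    for tok in refined:
        if tok != next_token(prefix + accepted):
            break
        accepted.append(tok)
    return prefix + accepted, refined
```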
I was hoping to compare speedups between Jacobi decoding and Lookahead decoding, to better understand the trade-off between FLOPs and lower latency. Is it possible that the LEVEL=2 case will be implemented in LADE in the future?
Currently, I am not planning to implement it (and LEVEL=2 would not be the pure Jacobi decoding you may want). It is better to follow this repo to implement Jacobi decoding: https://github.com/teelinsan/parallel-decoding
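For reference, the supported path in LADE is LEVEL>=3; a setup along the lines of the project README looks roughly like the snippet below (parameter names other than LEVEL are recalled from the README, so double-check them against the current code):

```python
import lade

lade.augment_all()  # enable lookahead decoding by patching transformers' generation path
# LEVEL must be >= 3 in the current implementation; WINDOW_SIZE and
# GUESS_SET_SIZE control the lookahead window and the n-gram candidate pool.
lade.config_lade(LEVEL=5, WINDOW_SIZE=7, GUESS_SET_SIZE=7, DEBUG=0)
```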
The blog mentions that:
However, this edge case does not seem to be properly handled. Setting
lade.config_lade(LEVEL=2)
leads to an error: