I noticed one of the core parts of the strategy is to call `generate` one token at a time, but I was wondering how slow/fast this is compared to using constrained beam search (e.g. HF's `ConstrainedBeamSearchScorer`) or something similar.
I'm also curious what the speedup might be of implementing this in C++ versus via a Python wrapper: https://github.com/ggerganov/llama.cpp/pull/1773
I actually think your approach is better for my use case, because there are many tweaks you can make even on the grammar sampling (as evidenced by the discussion in the PR above) ... but I am curious what the performance impact is.
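For context, here is a minimal sketch of the per-token loop I mean, with a stub function standing in for the model's forward pass (everything here -- `stub_logits`, `allowed_tokens`, the toy vocabulary -- is hypothetical, just to make the shape of the loop concrete): the grammar masks the candidate set at every step, so each emitted token costs one full model call.

```python
VOCAB = ["{", "}", '"key"', ":", '"value"', "<eos>"]

def stub_logits(prefix):
    # Stand-in for a real forward pass; returns one score per vocab entry.
    return [float(len(prefix) + i % 3) for i in range(len(VOCAB))]

def allowed_tokens(prefix):
    # Toy "grammar": an empty JSON object -- '{', then '}', then <eos>.
    if not prefix:
        return {"{"}
    if prefix[-1] == "{":
        return {"}"}
    return {"<eos>"}

def constrained_generate(max_steps=10):
    out = []
    for _ in range(max_steps):
        logits = stub_logits(out)      # one model call per generated token
        mask = allowed_tokens(out)     # grammar decides what is legal here
        # Greedily pick the highest-scoring token among the allowed ones.
        best = max((t for t in VOCAB if t in mask),
                   key=lambda t: logits[VOCAB.index(t)])
        out.append(best)
        if best == "<eos>":
            break
    return out

print(constrained_generate())  # ['{', '}', '<eos>']
```

The per-call Python overhead in this loop is what a C++ implementation (or an HF `LogitsProcessor` that lets a single `generate` call do the masking internally) would presumably reduce.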