r2d4 / rellm

Exact structure out of any language model completion.

How slow/fast is this method of calling generate() #11

Open RevanthRameshkumar opened 1 year ago

RevanthRameshkumar commented 1 year ago

I noticed that one of the core parts of the strategy is to call generate() one token at a time, but I was wondering how slow/fast this is compared to using constrained beam search or something similar from HF. I'm also curious what the speedup might be from implementing this in C++ rather than via a Python wrapper. https://github.com/ggerganov/llama.cpp/pull/1773

I actually think your approach is better for my use case, because there are many tweaks you can make even on the grammar sampling (as evidenced by the discussion in the PR above) ... but I am curious what the performance impact is.
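
For reference, the token-by-token loop being asked about boils down to something like the sketch below. This is illustrative only, not rellm's actual code: it assumes a Hugging Face causal LM (GPT-2 as a stand-in), uses greedy decoding, and elides the regex-based logit masking that constrained decoding would apply at each step. The per-token cost is one forward pass (cheap with the KV cache) plus whatever the filtering step costs.

```python
# Minimal sketch of one-token-at-a-time generation with HF transformers.
# NOT rellm's implementation; the constraint/masking step is only a comment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Return the answer as JSON: "
generated = tokenizer(prompt, return_tensors="pt").input_ids

past_key_values = None
for _ in range(20):  # up to 20 new tokens, one forward pass each
    with torch.no_grad():
        out = model(
            generated if past_key_values is None else generated[:, -1:],
            past_key_values=past_key_values,
            use_cache=True,  # reuse the KV cache so each step is a single-token pass
        )
    past_key_values = out.past_key_values
    logits = out.logits[:, -1, :]
    # A constrained decoder would mask out tokens here whose text would
    # violate the target pattern before choosing the next token.
    next_token = torch.argmax(logits, dim=-1, keepdim=True)
    generated = torch.cat([generated, next_token], dim=-1)

print(tokenizer.decode(generated[0]))
```

With the KV cache enabled, the looping itself adds little over a normal generate() call; most of any slowdown would come from the per-step filtering logic, which is the part a C++ implementation (as in the llama.cpp PR linked above) would speed up.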