The aim of this PR is to explore a possible solution to prompt alignment. I decided to open a separate draft PR as the option considered here is quite different from what I had in mind in #531.
I'm trying to find a solution that:
- does not modify the `states_to_token_maps` after initialization
- does not modify the `get_next_instruction` and `get_next_state` methods
- can be easily activated/deactivated by the end user (a hypothetical toggle is sketched just below)
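As a rough illustration of that last point, the switch could be a simple keyword argument. This is purely hypothetical, the parameter name is not part of the current API and nothing like it is implemented in this PR yet:

```python
# Hypothetical user-facing toggle; `enable_prompt_alignment` is an
# assumed parameter name, not something that exists in outlines today.
guide = RegexGuide(regex_string, tokenizer, enable_prompt_alignment=True)
```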
For this test I've looked only at the `RegexGuide` and have not covered some things that will need to be added later, as I want to focus on the idea of changing the `states_to_token_maps`.
To do so, the idea is to create, during the initialization of the `RegexGuide`, a `states_to_token_maps` that can accommodate token alignment for any prompt received later, by including states that sit "before" the initial state (anticipating that some characters at the end of the prompt will be removed for token alignment).
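To make this concrete, here is a minimal sketch of how such pre-initial states could be built. The `walk_fsm` helper (which advances the regex FSM over a plain string), the negative state ids, and the returned `suffix_to_state` lookup are all illustrative assumptions rather than what this PR actually implements:

```python
from typing import Callable, Optional

def build_alignment_states(
    vocabulary: dict[str, int],                      # token string -> token id
    states_to_token_maps: dict[int, dict[int, int]], # state -> {token id -> next state}
    initial_state: int,
    walk_fsm: Callable[[int, str], Optional[int]],   # advance the FSM over a string,
                                                     # returning None if it doesn't match
) -> tuple[dict[int, dict[int, int]], dict[str, int]]:
    """Extend the map with one state per candidate removed prompt suffix.

    A removed prompt suffix is always a prefix of some vocabulary token
    (the "crossing" token regenerates it), so enumerating token prefixes
    covers every prompt that could be received later.
    """
    extended = dict(states_to_token_maps)
    suffix_to_state: dict[str, int] = {}
    next_id = -1  # negative ids cannot collide with the FSM's own states
    suffixes = {t[:i] for t in vocabulary for i in range(1, len(t))}
    for suffix in sorted(suffixes):
        transitions = {}
        for token, token_id in vocabulary.items():
            if not token.startswith(suffix):
                continue
            # The part of the token that lands in the generated text proper
            remainder = token[len(suffix):]
            target = initial_state if not remainder else walk_fsm(initial_state, remainder)
            if target is not None:
                transitions[token_id] = target
        if transitions:
            extended[next_id] = transitions
            suffix_to_state[suffix] = next_id
            next_id -= 1
    return extended, suffix_to_state
```

Enumerating every token prefix and walking the FSM for each (prefix, token) pair is also where the initialization overhead mentioned below comes from.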
The downside of this approach is that it adds many states to the `states_to_token_maps` that are used/valid only for a given prompt. As a result, when running inference, some of the states can never be reached for the prompt provided, which sounds a bit strange.
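For context, at inference time only the single pre-initial state matching the end of the prompt would actually be used, along the lines of this sketch (again hypothetical; `suffix_to_state` comes from the helper above):

```python
def pick_start_state(
    prompt: str,
    suffix_to_state: dict[str, int],
    initial_state: int,
    max_suffix_len: int = 16,  # assumed bound on how many prompt characters we remove
) -> tuple[str, int]:
    """Remove the longest prompt suffix that has a pre-initial state."""
    for k in range(min(len(prompt), max_suffix_len), 0, -1):
        state = suffix_to_state.get(prompt[-k:])
        if state is not None:
            return prompt[:-k], state
    return prompt, initial_state  # no alignment possible for this prompt
```

Every other pre-initial state is dead weight for that particular prompt, which is the oddity mentioned above.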
Another problem is that it adds overhead to the initialization of the generator. The amount of overhead varies a lot with the size of the vocabulary and with how "constrained" the first characters of the regex are (the worst case being unconstrained text). I have not really looked at optimizing those operations yet though, so the cost could perhaps be reduced.
I would be curious to have your opinion on this @rlouf