NightMachinery opened this issue 2 months ago
hi @NightMachinery, our API does not support `suffix` as a parameter. Can I ask how this is different from just passing in `prompt = prompt + suffix`?
@orangetin Some models support fill-in-the-middle (FIM, infilling) completions, e.g.:
Code Llama
<PRE> {prompt_prefix} <SUF>{prompt_suffix} <MID>
DeepSeek
<|fim▁begin|>def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = []
    right = []
<|fim▁hole|>
        if arr[i] < pivot:
            left.append(arr[i])
        else:
            right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>
StarCoder
<fim_prefix>before <fim_suffix> after<fim_middle>
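All three formats above follow the same pattern: wrap the text before and after the cursor in model-specific sentinel tokens and let the model generate the middle. Here is a minimal sketch of how such prompts are assembled (the helper below is mine, for illustration only, not from any library):

```python
# Illustrative only: assemble a FIM prompt for the formats listed above.
def build_fim_prompt(model_family: str, prefix: str, suffix: str) -> str:
    """Wrap prefix/suffix in the model family's FIM sentinel tokens."""
    if model_family == "codellama":
        return f"<PRE> {prefix} <SUF>{suffix} <MID>"
    if model_family == "deepseek":
        return f"<|fim▁begin|>{prefix}<|fim▁hole|>{suffix}<|fim▁end|>"
    if model_family == "starcoder":
        return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
    raise ValueError(f"unknown model family: {model_family}")

# The model then generates the "middle" tokens, and the client inserts
# them between prefix and suffix in the editor.
print(build_fim_prompt("starcoder", 'print("My name', ')\n# Name: Armin Hajat'))
```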
The OpenAI way of doing FIM is to pass the suffix text as a `suffix` parameter to the legacy completions endpoint.
Basically, normal completions only see the context before the completion, while FIM also shows the model the context after it. For auto-complete models (like what GitHub Copilot does), seeing the context after the cursor is vital.
Just appending the after-context to the original prompt does not work: a plain completion model would simply continue writing after the appended suffix, because it has no way of knowing that its output should be inserted in between; FIM-trained models use the special formats above so they generate exactly the missing middle.
Here is an example for better understanding. Suppose we want to complete `<COMPLETE_HERE>` in:
print("My name<COMPLETE_HERE>)
# Name: Armin Hajat
# openai_text_complete is presumably a thin wrapper around OpenAI's
# legacy completions endpoint (it is not part of the openai SDK itself):
openai_text_complete(
model="gpt-3.5-turbo-instruct",
prompt=r"""
print("My name
""",
suffix=")\n# Name: Armin Hajat",
max_tokens=100,
temperature=0,
# stop=["\n"],
)
The call returns the completion ` is Armin Hajat"`, which, when inserted in place of `<COMPLETE_HERE>`, becomes:
print("My name is Armin Hajat")
# Name: Armin Hajat
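For comparison, roughly the same request can be made directly with the official openai Python SDK against the legacy completions endpoint; a sketch, assuming `OPENAI_API_KEY` is set (at the time of writing, `suffix` is only honored by certain models such as gpt-3.5-turbo-instruct):

```python
# Sketch: the same FIM request via the openai Python SDK (v1-style client),
# which exposes the legacy completions endpoint and its `suffix` parameter.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt='print("My name',
    suffix=')\n# Name: Armin Hajat',
    max_tokens=100,
    temperature=0,
)
print(response.choices[0].text)  # expected to be something like: ' is Armin Hajat"'
```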
thanks for the info! we will add this to our feature list
OpenAI's completion API has a `suffix` parameter that allows one to supply the context after the completion to enable fill-in-the-middle completions. This is not supported by the Together API.