Currently, `VLMModule` and `OpenAIModule` already support processing few-shot examples for each inference, but `OpenXModule` does not.
The remaining action items are:

- Finish the implementation of setting `k_shots_examples` in the `_process_batch` function.
- Add at least two unit test cases.
- Verify via end-to-end testing that the chat history passed to the API is correct.
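As a starting point for the first item, the sketch below shows one way `_process_batch` could fold `k_shots_examples` into each request's chat history. This is a hypothetical sketch only: the message format, the constructor signature, and the `_build_history` helper are assumptions, not the project's actual API.

```python
# Hypothetical sketch of few-shot handling in OpenXModule._process_batch.
# The (input, output) example format and the role/content message dicts
# are assumptions modeled on common chat-completion APIs.
class OpenXModule:
    def __init__(self, k_shots_examples=None):
        # Each example is an (example_input, expected_output) pair.
        self.k_shots_examples = k_shots_examples or []

    def _build_history(self, query):
        # Few-shot examples become alternating user/assistant turns,
        # followed by the actual query as the final user turn.
        history = []
        for example_input, example_output in self.k_shots_examples:
            history.append({"role": "user", "content": example_input})
            history.append({"role": "assistant", "content": example_output})
        history.append({"role": "user", "content": query})
        return history

    def _process_batch(self, queries):
        # One chat history per query, each carrying the same k-shot prefix.
        return [self._build_history(q) for q in queries]
```

A unit test for the second action item could then assert that each history starts with the example turns and ends with the query, e.g. `OpenXModule(k_shots_examples=[("2+2?", "4")])._process_batch(["3+3?"])` should yield a single three-turn history.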