Closed: bxrjmfh closed this issue 6 months ago
Did you try integrating directly with the generate() function? This unit test may help: https://github.com/noamgat/lm-format-enforcer/blob/main/tests/test_transformerenforcer.py#L43
On Sun, Mar 17, 2024 at 6:01 AM, Bxrjmfh wrote:
I would like to use the InstructBLIP model (https://huggingface.co/docs/transformers/v4.38.2/en/model_doc/instructblip) from Hugging Face Transformers to perform question answering and generate the output in JSON format. Currently, the example code seems to rely on the pipeline to interact with the language model. However, it appears that loading a pipeline based on InstructBLIP is not possible.
My question is: Is there a way to bypass the pipeline and directly control the model's output?
from transformers import InstructBlipProcessor
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import build_transformers_prefix_allowed_tokens_fn

processor = InstructBlipProcessor.from_pretrained("/root/VLN_2023/temp/instructblip-vicuna-7b")
parser = JsonSchemaParser(AnswerFormat.schema())
prefix_function = build_transformers_prefix_allowed_tokens_fn(processor.tokenizer, parser)
I noticed that the processor exposes the underlying tokenizer as processor.tokenizer, so it can be passed in directly like this. Thank you!