Closed: xjtupanda closed this issue 1 year ago.
I've figured it out myself. For those who might be interested: the `sentences` should be context information similar to the "in-context examples".
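To make that concrete, a single query-input record might look like the sketch below. This is purely illustrative: the helper name `build_query_input` and all the values are hypothetical, not taken from the actual MIMIC-IT annotation files; only the `id` and `sentences` keys come from the discussion above.

```python
# Hypothetical sketch of one query-input record for Syphus-style prompting.
# "id" mirrors the image (pair) ID; "sentences" holds the context strings
# that play the same role as the "in-context examples" in the prompt.
def build_query_input(image_id, context_sentences):
    """Assemble a query-input record from an image ID and its context."""
    return {
        "id": image_id,                  # aligns with the image ID
        "sentences": context_sentences,  # context information for the prompt
    }

example = build_query_input(
    "img_00042",  # made-up image ID
    [
        "A red car is parked on the left side of the street.",
        "In the second image, the red car is gone.",
    ],
)
```

The exact field set per dataset may differ; check the actual annotation files for the dataset you are adapting.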
Hi, I encountered the same issue as you did. Could you please share an example of an annotation file?
@zuwenqiang Take 'Spot the Difference' dataset as an example. You should:
- download the corresponding official annotation files (Link);
- modify the path of the corresponding annotation file in https://github.com/Luodian/Otter/blob/5e949c63ec38773fe639131bfcc800409172c495/mimic-it/syphus/datasets/change.py#L15;
- follow the steps and run the script as described in the Link.
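A minimal sketch of the second step, assuming the path is held in a module-level constant: point the loader at the downloaded file and fail early if it is missing. The constant name `SPOT_THE_DIFFERENCE_PATH` and the directory layout are assumptions for illustration, not the real identifiers in `change.py`.

```python
import json
import os

# Assumed location of the downloaded official annotation file; adjust to
# wherever you saved it (this is NOT the repository's real default path).
SPOT_THE_DIFFERENCE_PATH = "annotations/spot_the_difference/annotations.json"

def load_annotation_file(path):
    """Load the annotation JSON, failing early with a clear message."""
    if not os.path.exists(path):
        raise FileNotFoundError(
            f"Annotation file not found at {path}; download the official "
            "annotations first and update the path in change.py."
        )
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```

Checking the path up front gives a clearer error than letting the generation script fail partway through.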
Thank you for your helpful response; the issue has been resolved now.
I'm trying to follow your great work and am now trying to develop my own dataset. #202 has resolved many of my questions, but I still have some confusion about implementation details; I hope you can help me.
Does the `id` of the `load_query_inputs` method need to align with the image ID, and should the `sentences` be context information similar to the "in-context examples"? https://github.com/Luodian/Otter/blob/e7489a02d79e39e3e08fd983c72f2d7e6a30d622/mimic-it/syphus/prompts/spot_the_difference.json#L6 Could you share some annotation files so we can have a more straightforward understanding? Thanks in advance!