The file demo_glip_sam.py works rather well, but demo_vlpart_sam.py does not, except in some special cases, such as the one used in the example (assets/twodogs.jpeg with the prompt "dog head"). Usually no objects are detected or displayed in the output.
Are there any updates planned or is this project discontinued?
As a further note: even demo_glip_sam.py has problems with prompts that repeat the same word, e.g. "person head, vase, person". The function run_ner(self, caption) in predictor_glip.py uses a global find on the caption, so it can match the wrong occurrence of a repeated entity. This leads to insertion of the "object" token and incorrect labeling of boxes in the output.
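To illustrate the problem (this is a simplified, hypothetical sketch, not the actual run_ner implementation): a global str.find always returns the first occurrence of an entity, so for "person head, vase, person" the second "person" is mapped onto the span of "person head". Searching forward from the end of the previous match avoids this:

```python
def ner_global_find(caption, entities):
    """Buggy variant: a global find always returns the FIRST occurrence."""
    spans = []
    for ent in entities:
        start = caption.find(ent)          # always the first match
        spans.append((start, start + len(ent)))
    return spans

def ner_sequential_find(caption, entities):
    """Fixed variant: resume the search after the previous match."""
    spans = []
    cursor = 0
    for ent in entities:
        start = caption.find(ent, cursor)  # search from the cursor onward
        if start == -1:                    # entity not found past the cursor
            start = caption.find(ent)      # fall back to a global search
        spans.append((start, start + len(ent)))
        cursor = start + len(ent)
    return spans

caption = "person head, vase, person"
entities = ["person head", "vase", "person"]

# Buggy: the trailing "person" collides with "person head" at index 0.
print(ner_global_find(caption, entities))      # [(0, 11), (13, 17), (0, 6)]
# Fixed: the trailing "person" is correctly located at index 19.
print(ner_sequential_find(caption, entities))  # [(0, 11), (13, 17), (19, 25)]
```

A fix along these lines in run_ner would keep repeated entities aligned with their actual positions in the prompt.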