zhuchenye opened this issue 1 year ago
Aren't they releasing their own prompting support themselves? It was in testing on their web demo.
Have you seen this, @moorehousew?
I haven't seen this, but it looks promising.
Looking at the repository, the code we'd be interested in is located in grounded_sam_demo.py.
I'm not too familiar with this stuff, but it looks like it would need the grounded models (the repo, etc.) and some wrappers around a few functions from the file you linked (mask-extraction nodes and the main get_grounding_output method); a rough sketch is below.
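Untested sketch of the text-to-boxes half, based on the inference helpers in the GroundingDINO README. The config/checkpoint paths, image path, and thresholds are placeholders:

```python
from groundingdino.util.inference import load_model, load_image, predict

# paths are placeholders; point them at your checkout/downloads
model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",
    "weights/groundingdino_swint_ogc.pth",
)
image_source, image = load_image("demo.jpg")

# boxes come back as normalized cxcywh tensors; phrases are the matched text spans
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="dog . chair .",  # period-separated categories
    box_threshold=0.35,
    text_threshold=0.25,
)
```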
I definitely shy away when it comes to tensor stuff. I always break them or get shape mismatches, lol.
Still confused why ComfyUI handles masks the way it does instead of keeping them in the same format as tensors, so you could apply masking outside of sampling. I want a mask for my latent noise injection, lol. Something like the sketch below is what I mean.
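A minimal sketch in plain PyTorch, assuming ComfyUI's MASK convention of a [B, H, W] float tensor in 0..1 and a [B, 4, h, w] latent; the function name and strength parameter are just illustrative:

```python
import torch
import torch.nn.functional as F

def inject_masked_noise(latent, mask, strength=0.5, seed=0):
    """Add noise to a latent only where the mask is set.

    latent: [B, 4, h, w] tensor; mask: [B, H, W] float tensor in 0..1
    (ComfyUI's MASK convention). Name and params are illustrative.
    """
    gen = torch.Generator(device=latent.device).manual_seed(seed)
    noise = torch.randn(latent.shape, generator=gen,
                        device=latent.device, dtype=latent.dtype)
    # downscale the image-resolution mask to latent resolution (1/8 in SD)
    m = F.interpolate(mask.unsqueeze(1), size=latent.shape[-2:], mode="bilinear")
    return latent + noise * m * strength
```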
Question: the SAM that was implemented, was it the original release or SAM-HQ?
It's Meta's SAM, not a fork/extension, etc. I've been looking at SAM-HQ but don't like the setup it has currently; I'd like them to put together a proper PyPI package.
Since the SAM model is already implemented, we can use text prompts to segment the image with GroundingDINO; a sketch of wiring the two together is below.
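Roughly, feeding the GroundingDINO boxes into SAM could look like this. Untested sketch: the checkpoint path is a placeholder, and `image_source`/`boxes` come from the GroundingDINO step shown earlier:

```python
import torch
from torchvision.ops import box_convert
from segment_anything import sam_model_registry, SamPredictor

# image_source (HxWx3 RGB numpy array) and boxes (normalized cxcywh)
# come from the GroundingDINO step; the checkpoint path is a placeholder.
sam = sam_model_registry["vit_h"](checkpoint="weights/sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_source)

# scale to pixel xyxy, then map into SAM's input space
H, W, _ = image_source.shape
boxes_xyxy = box_convert(boxes * torch.tensor([W, H, W, H]),
                         in_fmt="cxcywh", out_fmt="xyxy")
transformed = predictor.transform.apply_boxes_torch(
    boxes_xyxy, image_source.shape[:2]).to(predictor.device)

masks, _, _ = predictor.predict_torch(
    point_coords=None,
    point_labels=None,
    boxes=transformed,
    multimask_output=False,
)  # masks: [N, 1, H, W] boolean, one per detected box
```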