Open michelbotros opened 8 months ago
I am experiencing the same thing, just between the online demo and the workflow shown in predictor_example.ipynb on my own images.
Yeah same here
The primary reason for the variation in masks between the online demo and the local notebook is that the online demo uses a different model for "segment everything" functionality. The online demo is powered by the model hosted at the following endpoint:
The link you provided cannot be opened. Why?
Indeed, the online demo clearly performs better than running locally. Why?
There is no such thing as a free lunch; the best-tuned model is, of course, not available.
The primary reason for the variation in masks between the online demo and the local notebook is that the online demo uses a different model for the "segment everything" functionality. The online demo is powered by the model hosted at the following endpoint: segment_everything_box_model
The link you provided cannot be opened. Why?
The link I provided is an API reference URL that I identified by inspecting and analyzing the web application's code. The online demo model, named segment_everything_box_model, is hosted on Meta's private servers and isn't publicly accessible. The online demo interacts with this private model via API calls.
Hi,
I have been trying out SAM in the online demo and it works great using the "Everything" button:
Image:
Result demo:
I then tried to recreate these results in code using SamAutomaticMaskGenerator and the default ViT-H model. See code:
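For reference, the code image did not come through, so here is a minimal sketch of the standard local "segment everything" workflow from the SAM repository. The checkpoint filename, image path, and the helper names (generate_masks, largest_masks) are my own placeholders, not the original poster's code:

```python
def generate_masks(image_path, checkpoint="sam_vit_h_4b8939.pth", device="cuda"):
    # Lazy imports so the pure-Python helper below stays importable
    # even without segment_anything / opencv installed.
    import cv2
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    # Load the default ViT-H checkpoint and move it to the GPU.
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    sam.to(device=device)

    # SAM expects RGB; OpenCV loads BGR, so convert.
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)

    # Returns a list of dicts with keys like "segmentation", "area", "bbox".
    return SamAutomaticMaskGenerator(sam).generate(image)


def largest_masks(masks, n=10):
    # Keep only the n largest masks by pixel area; a simple post-processing
    # step to drop the many small regions the default settings produce.
    return sorted(masks, key=lambda m: m["area"], reverse=True)[:n]
```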
However, I get a different result: a large portion of the background is segmented, and many more small regions are detected.
Result code:
For my application the demo result is preferred. If anyone knows how to recreate the demo results, it would be greatly appreciated if it's shared!
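The demo's exact settings are not public, but SamAutomaticMaskGenerator does expose tuning knobs that can push the local output toward the demo's cleaner look. A sketch with illustrative values (the values themselves are guesses, not Meta's configuration; DEMO_LIKE_SETTINGS and build_generator are my names):

```python
# Knobs exposed by SamAutomaticMaskGenerator in the segment_anything repo.
# Raising the thresholds and setting a minimum region area suppresses the
# small spurious masks seen with the defaults.
DEMO_LIKE_SETTINGS = dict(
    points_per_side=32,           # density of the sampled point grid
    pred_iou_thresh=0.92,         # raise above the 0.88 default to drop low-quality masks
    stability_score_thresh=0.96,  # raise to drop unstable masks
    min_mask_region_area=500,     # remove tiny disconnected regions (requires opencv)
)


def build_generator(sam, **overrides):
    # Lazy import so this module stays importable without SAM installed.
    from segment_anything import SamAutomaticMaskGenerator
    return SamAutomaticMaskGenerator(sam, **{**DEMO_LIKE_SETTINGS, **overrides})
```

Usage would be `masks = build_generator(sam).generate(image)`, with individual settings overridable per call, e.g. `build_generator(sam, points_per_side=64)`.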