senlin-ali opened this issue 1 year ago
Do you mean fine-tuning with point prompts? Most likely yes, but you would probably need to change the dataset format.
Yes, I tried it myself, but I found that results get worse when fine-tuning only the decoder with point prompts, so I want to try your codebase.
I have another question: you train with bounding box prompts; do you also use box prompts during validation?
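On the dataset-format point above: as far as I understand the upstream segment_anything API, switching from box to point prompts mostly means the dataset has to yield per-object point coordinates and labels, which then go into the prompt encoder roughly as in the sketch below. This is against the original SAM modules, not this repo's wrappers, and the checkpoint path and coordinates are placeholders.

```python
# Sketch: feeding a point prompt (instead of a box) to the original SAM
# prompt encoder from the segment_anything package. The dataset would then
# need to provide point_coords/point_labels per object instead of a box.
import torch
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # placeholder path

# One foreground click for one image. Coordinates must be in the frame the
# image encoder sees, i.e. after the ResizeLongestSide(1024) transform.
point_coords = torch.tensor([[[512.0, 384.0]]])  # shape (B=1, N=1, 2), (x, y)
point_labels = torch.tensor([[1]])               # shape (B=1, N=1), 1 = foreground

sparse_emb, dense_emb = sam.prompt_encoder(
    points=(point_coords, point_labels),
    boxes=None,
    masks=None,
)
```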
It would be nice to have generated points and scribble positional embeddings in fine-tuning.
When I set the prompt encoder to False in config.py, I got this error:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one
Have you tried this, and does it work?
What do you mean?
@senlin-ali I see. Actually, I've only had time to test the mask decoder fine-tuning. Can you post the full traceback for prompt encoder false?
> Have you tried this, and does it work?
> What do you mean?
Have you tried point prompts? Do they work?
> @senlin-ali I see. Actually, I've only had time to test the mask decoder fine-tuning. Can you post the full traceback for prompt encoder false?

File "/data/anaconda3/envs/A100/lib/python3.9/site-packages/lightning/fabric/wrappers.py", line 110, in forward
    output = self._forward_module(*args, **kwargs)
File "/data/anaconda3/envs/A100/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
File "/data/anaconda3/envs/A100/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 994, in forward
    if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). Parameter indices which did not receive grad for rank 3: 0 1 4 5 6 7 8 9 10 11 12 13 14. In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error.
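For what it's worth, that RuntimeError is DDP complaining that some parameters it registered for gradient reduction (apparently the prompt encoder's) never received gradients. One generic workaround is to let DDP scan for unused parameters each step. The sketch below assumes a Lightning Fabric setup, as the traceback suggests; the accelerator/devices values are placeholders, not what this repo's config actually uses.

```python
# Generic workaround sketch (not this repo's actual train.py): let DDP scan
# for parameters that produced no gradient. Adds overhead, but unblocks training.
from lightning.fabric import Fabric
from lightning.fabric.strategies import DDPStrategy

fabric = Fabric(
    accelerator="cuda",
    devices=4,  # placeholder values; use whatever config.py specifies
    strategy=DDPStrategy(find_unused_parameters=True),
)
fabric.launch()
```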
> Have you tried point prompts? Do they work?
Not yet, but I think it could work the same way as with the mask. It could be useful to have a point generator here with the policy described on page 19 of the SAM paper (the "Point sampling" section).
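In case it helps, here is a rough sketch of such a point generator. It paraphrases the interactive setup described in the SAM paper (first click inside the object, later clicks sampled from the error region between the previous prediction and the ground truth); check the "Point sampling" section for the exact policy, as this is only one reading of it.

```python
# Rough sketch of an iterative point generator in the spirit of the SAM paper's
# interactive training; the exact policy in the paper may differ in details.
from typing import Optional, Tuple
import numpy as np

def sample_click(gt_mask: np.ndarray,
                 pred_mask: Optional[np.ndarray] = None) -> Tuple[Tuple[int, int], int]:
    """Return ((x, y), label): label 1 = foreground click, 0 = background click."""
    if pred_mask is None:
        # First click: a random pixel inside the ground-truth mask.
        region, label = gt_mask > 0, 1
    else:
        # Later clicks: sample from the error region of the previous prediction.
        false_neg = (gt_mask > 0) & (pred_mask == 0)   # missed object pixels
        false_pos = (gt_mask == 0) & (pred_mask > 0)   # spurious predicted pixels
        if false_neg.sum() >= false_pos.sum():
            region, label = false_neg, 1
        else:
            region, label = false_pos, 0
        if not region.any():                           # perfect prediction: fall back
            region, label = gt_mask > 0, 1
    ys, xs = np.nonzero(region)
    j = np.random.randint(len(xs))
    return (int(xs[j]), int(ys[j])), label
```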
Thanks, I will look at the details in the paper.
> @senlin-ali I see. Actually, I've only had time to test the mask decoder fine-tuning. Can you post the full traceback for prompt encoder false?
@luca-medeiros
I have the same error ... has this bug been fixed?
File "/data/anaconda3/envs/A100/lib/python3.9/site-packages/lightning/fabric/wrappers.py", line 110, in forward output = self._forward_module(*args, *kwargs) File "/data/anaconda3/envs/A100/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(input, **kwargs) File "/data/anaconda3/envs/A100/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 994, in forward if torch.is_grad_enabled() and self.reducer._rebuild_buckets(): RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). Parameter indices which did not receive grad for rank 3: 0 1 4 5 6 7 8 9 10 11 12 13 14 In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
@luca-medeiros
It is a bit strange that the parameters of the prompt_encoder do not participate in training and receive no gradients.
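It is probably less strange once you look at how DDP registers parameters: anything with requires_grad=True is added to the reducer when the model is wrapped, so if the prompt encoder's output never feeds the loss, its parameters wait for gradients that never arrive. A minimal sketch of freezing it explicitly, assuming the upstream attribute name sam.prompt_encoder (how this repo wraps the model may differ):

```python
# Sketch: freeze the prompt encoder so DDP never registers its parameters.
# Assumes the upstream SAM attribute name `prompt_encoder`; call this before
# fabric.setup()/DDP wrapping, not after.
def freeze_prompt_encoder(sam):
    for param in sam.prompt_encoder.parameters():
        param.requires_grad = False
    sam.prompt_encoder.eval()  # keep it in eval mode during fine-tuning
```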
> Yes, I tried it myself, but I found that results get worse when fine-tuning only the decoder with point prompts, so I want to try your codebase.
Hello, I also implemented this on my end and encountered this issue. My loss has not been able to decrease, and the mIoU is significantly low, resulting in bad mask visualizations. May I ask the following questions? 1. Did you also use only the center point as the prompt during point-prompt fine-tuning? 2. Were you able to get the loss to decrease?
if this can be converted to point prompt