ByungKwanLee / Causal-Unsupervised-Segmentation

Official PyTorch implementation code for realizing the technical part of Causal Unsupervised Semantic sEgmentation (CAUSE) to improve the performance of unsupervised semantic segmentation. (Under Review)

Questions related to the loss maximizing modularity #2

Closed JizeCao closed 1 year ago

JizeCao commented 1 year ago

Is the modularity loss used in the paper equivalent to the modularity loss proposed in ACSeg? I haven't fully worked through the math, but the two losses look very similar.

ByungKwanLee commented 1 year ago

Yes, we use a formulation similar to the modularity loss in ACSeg, but we definitely use it for a different purpose. ACSeg computes the modularity between DINO features (adjacency matrix) and the output of concept queries passed through multiple cross- and self-attention structures, where the number of concept queries is set to 5.

However, CAUSE computes the modularity between DINO features (adjacency matrix) and a Concept Clusterbook whose size is set to 2048, and then refines segmentation features via concept-wise self-supervised learning using the learned Concept Clusterbook. The key difference is that CAUSE does not classify clusters with the learned Concept Clusterbook but uses it only to perform self-supervised learning; ACSeg, on the other hand, uses its concepts to classify clusters.
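For readers who want a concrete picture of this objective, here is a minimal sketch of a soft modularity loss between a DINO patch-feature affinity matrix and a concept codebook. The function name `modularity_loss`, the cosine-similarity adjacency, the softmax assignment, and the `temperature` parameter are illustrative assumptions on my part, not the exact implementation in this repository.

```python
import torch
import torch.nn.functional as F

def modularity_loss(feats, codebook, temperature=0.1):
    """
    Hypothetical sketch of a soft modularity objective between a DINO
    patch-feature affinity graph and a concept codebook (e.g., 2048 entries).

    feats:    [N, D] patch features from a frozen DINO backbone
    codebook: [K, D] learnable concept clusterbook
    """
    # Adjacency matrix from cosine similarity of DINO features
    f = F.normalize(feats, dim=-1)
    A = (f @ f.t()).clamp(min=0)                     # [N, N], non-negative affinities

    # Soft assignment of each patch to a concept in the codebook
    c = F.normalize(codebook, dim=-1)
    S = F.softmax(f @ c.t() / temperature, dim=-1)   # [N, K]

    # Soft modularity: Q = (1/2m) * Tr(S^T (A - d d^T / 2m) S)
    d = A.sum(dim=1, keepdim=True)                   # degree vector [N, 1]
    two_m = A.sum()
    B = A - (d @ d.t()) / two_m                      # modularity matrix
    Q = torch.trace(S.t() @ B @ S) / two_m

    # Maximizing modularity == minimizing its negation
    return -Q
```

Under this sketch, the codebook plays the role of the Concept Clusterbook: it is optimized so that patches connected strongly in the DINO affinity graph are softly assigned to the same concept, and the learned codebook is then reused for concept-wise self-supervised learning rather than for directly classifying clusters.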

This is why ACSeg achieves only a very marginal performance improvement.

Beyond that, we provide a technical contribution for this modularity maximization: ACSeg did not release any code implementation, so we could not verify its reproducibility, whereas CAUSE provides the full code implementation and its weight parameters for various datasets.

JizeCao commented 1 year ago

Thanks for the detailed explanation! I appreciate the authors' great effort on this work. 👍