connorlee77 / pytorch-mutual-information

Mutual Information in Pytorch

Vague Meaning #1

Closed PeterClapham closed 2 years ago

PeterClapham commented 2 years ago

I've looked at what the code is doing, and I can't work out the purpose of the B dimension. Why is one input [x1, x1] and the other [x1, x2]? When finding MI(x1; x2), you'd want one input to simply be the vector x1 and the other to be the vector x2. Similarly, for the histogram, you'd want to compute a histogram for just x1 rather than [x1, x2]. These vectors would be of shape (N, 1). What is the purpose of this concatenation?

connorlee77 commented 2 years ago

The mutual information function in this repository was developed to compute the differentiable mutual information of a batch of image pairs for use in a PyTorch network, which enables seamless integration into network training. In the example you mention, I'm computing the MI of the image pairs (x1, x1) and (x1, x2) in a single batch. This was done to illustrate that the MI function works as expected: MI(x1, x1) yields the maximum (normalized) MI of 1, while MI(x1, x2) does not.
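To make the batching concrete, here is a minimal sketch of a differentiable, histogram-based MI over a batch of flattened image pairs. This is not the repository's implementation — the function names, the Gaussian soft-binning, and the bin count are my own illustrative choices — but it shows why the inputs are stacked as [x1, x1] and [x1, x2]: each batch row is one image pair, and MI is computed per row in a single call.

```python
import torch

def soft_joint_hist(x, y, bins=32, sigma=0.02):
    # x, y: (B, N) batches of flattened images, values assumed in [0, 1].
    # Soft-assign each pixel to histogram bins with a Gaussian kernel so the
    # histogram (and hence MI) stays differentiable w.r.t. the inputs.
    centers = torch.linspace(0.0, 1.0, bins)                               # (bins,)
    wx = torch.exp(-0.5 * ((x.unsqueeze(-1) - centers) / sigma) ** 2)      # (B, N, bins)
    wy = torch.exp(-0.5 * ((y.unsqueeze(-1) - centers) / sigma) ** 2)      # (B, N, bins)
    joint = torch.einsum('bni,bnj->bij', wx, wy)                           # (B, bins, bins)
    return joint / joint.sum(dim=(1, 2), keepdim=True)                     # normalize to p(x, y)

def mutual_information(x, y, bins=32):
    # Returns one MI value per batch row: MI = sum p(x,y) * log(p(x,y) / (p(x) p(y))).
    p_xy = soft_joint_hist(x, y, bins)          # (B, bins, bins)
    p_x = p_xy.sum(dim=2, keepdim=True)         # (B, bins, 1) marginal
    p_y = p_xy.sum(dim=1, keepdim=True)         # (B, 1, bins) marginal
    eps = 1e-10
    mi = (p_xy * (torch.log(p_xy + eps) - torch.log(p_x * p_y + eps))).sum(dim=(1, 2))
    return mi                                   # (B,)

# Mirroring the example from the issue: batch the pairs (x1, x1) and (x1, x2).
torch.manual_seed(0)
x1 = torch.rand(1, 1000)
x2 = torch.rand(1, 1000)
a = torch.cat([x1, x1], dim=0)   # first inputs:  [x1, x1]
b = torch.cat([x1, x2], dim=0)   # second inputs: [x1, x2]
mi = mutual_information(a, b)    # row 0 = MI(x1, x1), row 1 = MI(x1, x2)
```

Row 0 (an image with itself) should come out much larger than row 1 (two independent images), which is the sanity check the example in the repo is demonstrating.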