TmacMai / Multimodal-Information-Bottleneck

Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations (MIB for multimodal sentiment analysis)
MIT License

About visualization #5

Open BigBangMAX opened 1 year ago

BigBangMAX commented 1 year ago

Hello, after reading your paper I am very interested in your work. However, I'm confused about the part of the paper that uses t-SNE to visualize the datasets, and I'm not sure how to implement it. Could you share the related source code? Looking forward to your reply. Thank you!

TmacMai commented 1 year ago

Hi, thanks for your interest in our work. We first save the multimodal representation and the labels in .npy format, and then use the following code to visualize them (a rough sketch of the saving step is included after the visualization code):

import numpy as np
import matplotlib.pyplot as plt
from sklearn import manifold

# Load the labels and collapse them into binary classes (positive: 1, negative: -1).
label = np.load('label.npy', allow_pickle=True)
for i in range(len(label)):
    if label[i] > 0:
        label[i] = 1
    elif label[i] < 0:
        label[i] = -1

# Load the saved multimodal representations.
X = np.load('total_x.npy', allow_pickle=True)
y = np.array(label)

# Project the representations to 2-D with t-SNE.
# (Recent scikit-learn versions may expect `max_iter` instead of `n_iter`.)
tsne = manifold.TSNE(n_components=2, init='pca', learning_rate=10, perplexity=200.0,
                     early_exaggeration=10, random_state=50001, n_iter=50000)
X_tsne = tsne.fit_transform(X)

print("Original data dimension is {}. Embedded data dimension is {}".format(X.shape[-1], X_tsne.shape[-1]))

# Visualize the embedding space: rescale the 2-D coordinates to [0, 1].
x_min, x_max = X_tsne.min(0), X_tsne.max(0)
X_norm = (X_tsne - x_min) / (x_max - x_min)

plt.figure(2)
idx_1 = np.argwhere(y == 1)
plt.scatter(X_norm[idx_1, 0], X_norm[idx_1, 1], marker='x', color='m', label='positive', s=30)
idx_2 = np.argwhere(y == -1)
plt.scatter(X_norm[idx_2, 0], X_norm[idx_2, 1], marker='o', color='b', label='negative', s=50)

plt.legend(loc='upper right')
plt.show()
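For completeness, the saving step mentioned above might look roughly like the sketch below, assuming the model is implemented in PyTorch; the names fused_repr and labels are placeholders for whatever tensors your model actually produces, and only the file names correspond to the ones loaded by the visualization code:

import numpy as np
import torch

# Hypothetical example: `fused_repr` holds the multimodal representations for all
# evaluated samples and `labels` the corresponding sentiment scores (placeholder names).
fused_repr = torch.randn(128, 64)   # placeholder shape: (num_samples, feature_dim)
labels = torch.randn(128)           # placeholder continuous sentiment labels

# Save to .npy so that the visualization code above can load them.
np.save('total_x.npy', fused_repr.detach().cpu().numpy())
np.save('label.npy', labels.detach().cpu().numpy())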