keras-team / keras

Deep Learning for humans
http://keras.io/
Apache License 2.0

visualize the newly learned weights in t-sne #5204

Closed vinayakumarr closed 6 years ago

vinayakumarr commented 7 years ago

Does anybody know how to visualize newly learned weights with t-SNE? I have used Keras for an LSTM with numeric features and also for text classification. I want to plot the newly learned weights with t-SNE.

RamaneekGill commented 7 years ago

I haven't visualized weights in a while, but I do visualize my embeddings for most data, and I imagine it is much the same thing: get the weights from the layer, apply manifold learning to reduce them to two dimensions, and visualize them with a scatter plot.

Resources I often fall back on are:

Here is some code for visualization to get you started:

import matplotlib.pyplot as plt

def plot_embedding(x, y):
    # x: (n, 2) array of reduced points; y: per-point labels used as colors
    plt.figure(figsize=(13, 13))
    ax = plt.subplot(aspect='equal')
    ax.scatter(x[:, 0], x[:, 1], lw=0, s=40, c=y, cmap='RdYlGn')
    plt.xlim(-25, 25)
    plt.ylim(-25, 25)
    ax.axis('off')
    ax.axis('tight')

    plt.show()

plot_embedding(transformed_weights, weight_labels)

Code to get you started with t-SNE in sklearn:

from sklearn.manifold import TSNE

# get_weights() returns a list of arrays; take the kernel (first entry)
weights = model.get_layer(index={your layer index}).get_weights()[0]
tsne = TSNE(n_components=2, random_state=random_seed, verbose=1)
transformed_weights = tsne.fit_transform(weights)

If this helps please close the issue :)
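Putting the two snippets above together, a self-contained sketch of the reduce-then-plot pipeline might look like this (the array shapes and the random weights standing in for a real layer's kernel are assumptions for illustration):

```python
import numpy as np
from sklearn.manifold import TSNE

# Synthetic stand-in for a layer's kernel: 100 weight vectors of dimension 64
rng = np.random.default_rng(0)
weights = rng.normal(size=(100, 64))
# Hypothetical per-vector labels (e.g. a cluster or class id for coloring)
weight_labels = rng.integers(0, 5, size=100)

# Reduce to 2-D; perplexity must be smaller than the number of samples
tsne = TSNE(n_components=2, perplexity=30.0, random_state=0)
transformed_weights = tsne.fit_transform(weights)

print(transformed_weights.shape)  # → (100, 2), one 2-D point per weight vector
```

`transformed_weights` and `weight_labels` can then be passed straight to `plot_embedding` above.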

AritzBi commented 7 years ago

I'm trying to visualize the output of my last dense layer, whose output size is 512. I've applied t-SNE to reduce the dimensionality and visualize the results. However, all the test samples end up in one big cluster. Should I reduce the dimensionality with some other technique, or is it just like that and there is nothing I can do?

stale[bot] commented 7 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

psgl commented 7 years ago

@AritzBi According to this article in Distill, you might want to try different parameter settings.
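For instance, sweeping the perplexity (one of the settings that article discusses) will often break up a single undifferentiated blob — a sketch using random 512-D outputs as a stand-in for real dense-layer activations:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
outputs = rng.normal(size=(150, 512))  # stand-in for 512-D dense-layer outputs

# Low perplexity emphasizes local structure, high perplexity global structure
for perplexity in (5, 30, 50):
    reduced = TSNE(n_components=2, perplexity=perplexity,
                   random_state=42).fit_transform(outputs)
    print(perplexity, reduced.shape)
```

Plotting each `reduced` array side by side makes it easy to judge which setting separates the samples best.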

stale[bot] commented 6 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

bagustris commented 2 years ago

> (quoting @RamaneekGill's reply above in full)

How do you get weight_labels in this case? For example, my last layer consists of 512 nodes, but there are, say, 1000 test labels.

MoritzKronberger commented 10 months ago

I think what you are interested in is visualizing the embeddings your model produces, not necessarily the weights themselves?

To do this, you can create a second model that ends at your embedding layer (usually the penultimate one):

from tensorflow.keras import Model

model_input = model.input
model_embed = model.get_layer("my_embedding_layer").output

# Create a new model that ends at the embedding layer
embed = Model(inputs=model_input, outputs=model_embed)

You can now use this model to generate embeddings (for example on your test data) by running predictions:

embeddings = embed.predict(x_test)

Now, you can use T-SNE just like @RamaneekGill described to reduce the dimensions of your embeddings:

from sklearn.manifold import TSNE

# Fit T-SNE on embeddings
tsne = TSNE(n_components=2, random_state=42, verbose=1)
reduced_weights = tsne.fit_transform(embeddings)

In this case, your weight_labels would simply be your test labels y_test. (Depending on your labels' encoding, you might have to convert them for use with matplotlib.)

This approach is based on this blog post.
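If the test labels are one-hot encoded, the conversion mentioned above could look like this (a sketch with a hypothetical 3-class label array):

```python
import numpy as np

# Hypothetical one-hot test labels: 4 samples, 3 classes
y_test = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1],
                   [0, 1, 0]])

# argmax along the class axis yields integer labels usable as scatter colors
weight_labels = y_test.argmax(axis=1)
print(weight_labels)  # → [0 1 2 1]
```

The resulting integer array can be passed directly as the `c=` argument of matplotlib's scatter.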