chrisranderson / beholder

A TensorBoard plugin for visualizing arbitrary tensors in a video as your network trains.

Drop Pillow dependency #40

Closed · chrisranderson closed this 7 years ago

chrisranderson commented 7 years ago

I should be able to use https://www.tensorflow.org/versions/r0.12/api_docs/python/image/resizing#resize_images and https://www.tensorflow.org/versions/r0.12/api_docs/python/image/encoding_and_decoding to replace everything Pillow currently handles. I'm wondering how the performance will compare.
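Roughly, I'm picturing something like this (an untested sketch against the r0.12 API linked above; the frame and target size are made up):

```python
import numpy as np
import tensorflow as tf

frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in frame

resized = tf.image.resize_images(frame, [240, 320])        # bilinear by default; float32 output
encoded = tf.image.encode_png(tf.cast(resized, tf.uint8))  # scalar string tensor of PNG bytes

with tf.Session() as sess:
    png_bytes = sess.run(encoded)
```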

jart commented 7 years ago

TensorFlow uses libpng under the hood. It can encode PNGs in microseconds if you construct the graph and session beforehand. Cold-start performance is something we're still working on.
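Something like this (a sketch, not TensorBoard's actual code): build the op and the Session once at startup, and each subsequent encode is a single `sess.run`:

```python
import tensorflow as tf

# Built once, at startup.
image = tf.placeholder(tf.uint8, shape=[None, None, None])  # height, width, channels
encoded = tf.image.encode_png(image)
sess = tf.Session()

def to_png(frame):
    # Per-call cost is just the feed plus the libpng kernel.
    return sess.run(encoded, feed_dict={image: frame})
```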

chrisranderson commented 7 years ago

Why does TensorFlow require you to construct a graph for PNG encoding? I would expect operations that aren't part of a neural net not to require a session.

chrisranderson commented 7 years ago

I'll wait to do this until I know more about whether it will be merged into TensorBoard (#33). The TensorFlow approach adds complexity to accomplish the same thing, with no benefit I've seen besides dropping the dependency (one small performance test actually showed this method was slower), and I only need to drop the dependency if this repository is going to be merged. I would need to:

jart commented 7 years ago

That's fair. But it's worth noting that we're adding an API right now to make this easier for you. https://github.com/tensorflow/tensorboard/pull/320

As for whether or not it will be merged: I am very excited about the potential of this contribution, and we'll know for certain when we collect feedback. I've been holding off on collecting feedback because you indicated you might make the video. I can collect feedback with or without it; I just think it's easier to attract support if there is one. Tell me how you want to proceed.

chrisranderson commented 7 years ago

Waiting was a great idea - alright, you've persuaded me to try a video! I'm planning to have it finished by tomorrow. I'll send you and @dandelionmane a link tomorrow so you can take a look (or give up on it). :)

Could I ask a huuuuge favor? Would you mind installing it and playing around with it sometime? I would hate for the big debut to flop because, say, it doesn't work on Macs, which I haven't tested.

wchargin commented 7 years ago

(FYI, Dandelion is on vacation for the rest of August.)

We all took a look as a group recently. We were very impressed by the multipart/x-mixed-replace PNG streaming; none of us had seen that before, and we all thought it was very clever. Once we got over that, we tried to interpret the visualization and all the options. (We determined that the first long, skinny image was probably the raw MNIST data, where each row corresponds to an input image reshaped to (1, 784), and that the rest were probably the magnitudes of gradients at various layers of the network, but we weren't sure.) We didn't know what all the options meant, which is why we asked for better docs. This test was on Ubuntu.

I second the suggestion of a video demo. When I'm considering using a piece of software, a well-structured video demo can go a long way.

> Create a new graph every time the shape of the image changes

You don't need to do this: use a placeholder with unspecified static shape. (The PR that I created at https://github.com/tensorflow/tensorboard/pull/320 demonstrates how to do this; the relevant helper is private right now, but I'll expose it as a public API.)
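A sketch of what I mean (the shapes are illustrative): with the static shape left unspecified, one graph serves every image shape:

```python
import numpy as np
import tensorflow as tf

image = tf.placeholder(tf.uint8, shape=None)  # static shape unknown: any rank, any dims
encoded = tf.image.encode_png(image)
sess = tf.Session()

# The same graph handles both shapes; no rebuilding required.
small = sess.run(encoded, feed_dict={image: np.zeros((28, 28, 1), np.uint8)})
large = sess.run(encoded, feed_dict={image: np.zeros((480, 640, 3), np.uint8)})
```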

wchargin commented 7 years ago

> Figure out GPU memory allocation, since TensorFlow eats up all memory by default (I'd bet it's not too hard to tell TensorFlow to use the CPU for these operations)

```python
with tf.device("/cpu:0"):
    ...  # build the image ops under this scope and they'll run on the CPU
```
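Fleshed out a little (still just a sketch): pin the ops to the CPU when you build them, and optionally hand the Session a config that hides the GPUs so it can't reserve their memory at all:

```python
import tensorflow as tf

with tf.device("/cpu:0"):
    image = tf.placeholder(tf.uint8, shape=None)
    encoded = tf.image.encode_png(image)

# Belt and suspenders: a Session that sees no GPUs won't allocate GPU memory.
sess = tf.Session(config=tf.ConfigProto(device_count={"GPU": 0}))
```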

chrisranderson commented 7 years ago

I deserve nigh unto zero credit for the multipart response. I didn't think about this until now, but maybe I should leave a comment explaining where it came from. I don't want to give the impression that I came up with it. The original source is here, I think: https://blog.miguelgrinberg.com/post/video-streaming-with-flask.
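For reference, the basic pattern from that post looks roughly like this (a sketch using Flask, as in the post; `next_frame_png` is a made-up stand-in, and Beholder's actual server code differs):

```python
from flask import Flask, Response

app = Flask(__name__)

def frames():
    while True:
        png_bytes = next_frame_png()  # hypothetical: returns one encoded PNG frame
        yield (b"--frame\r\n"
               b"Content-Type: image/png\r\n\r\n" + png_bytes + b"\r\n")

@app.route("/stream")
def stream():
    # The browser replaces the displayed image each time a new part arrives.
    return Response(frames(), mimetype="multipart/x-mixed-replace; boundary=frame")
```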

It also doesn't work with Edge, but I'm guessing that's not a big deal.

There's more info in the README now that should help with interpreting the video, but I'll still get cracking on that video. I'm expecting pretty high overlap between the content of the README and the video.

FYI, you were probably looking at tf.trainable_variables(). White meant high variance in the value of that particular parameter over time, black meant low variance, all relative to that particular layer. I made changes to how 2D layers are visualized a little while ago - you may have been looking at the less interpretable version. Now, rows correspond to weights all tied to the same input node, and columns correspond to weights all tied to the same output node... I think. :)
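Roughly what that view computes, as a from-memory sketch (not the exact code; `weight_history` is a stand-in for the recorded snapshots of one layer's weight matrix):

```python
import numpy as np

snapshots = np.stack(weight_history)      # shape: (time, inputs, outputs)
var = snapshots.var(axis=0)               # per-parameter variance over time
scaled = 255 * (var - var.min()) / (np.ptp(var) + 1e-12)  # normalized within the layer
image = scaled.astype(np.uint8)           # white = high variance, black = low
```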

chrisranderson commented 7 years ago

@jart I think I'll be recording a "final" go sometime within the next few hours.

chrisranderson commented 7 years ago

Whoops, duplicated the issue. See #48 for more updates.