chrisranderson opened this issue 6 years ago
Great! Thanks for the update.
Okay, here it is (5 minutes 44 seconds): https://www.youtube.com/watch?v=06HjEr0OX5k. Viewing in 1080p makes the text look good.
I'm not particularly proud of it, but I'm also not sure I want to spend a lot more time on it.
I'm also a bit worried about people cloning it right now. I'm having an issue with tf.contrib.util.make_ndarray... it seems buggy. It takes forever the first time after TensorBoard is started; I timed it at 110 seconds, and then it's fine afterward at less than a millisecond.
I'll take a look at the video now! :-)
110 seconds to make_ndarray‽ That is certainly not expected. It takes half a second for me to convert a 10000×10000 TensorProto to a numpy array using tf.make_ndarray (you don't need to use contrib, by the way). Can you reproduce this?
What data type and size is the tensor that you're trying to convert to an ndarray? I know that make_ndarray sometimes has issues with string tensors, but those shouldn't cause performance problems.
Here is the issue with more information: https://github.com/tensorflow/tensorflow/issues/12043
tensor_proto dtype: DT_UINT8
tensor_shape {
  dim {
    size: 1059
  }
  dim {
    size: 768
  }
}
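For reference, a minimal sketch of the conversion being timed here, assuming only that TensorFlow is installed (the array contents and timing approach are illustrative, not from the original report):

```python
import time

import numpy as np
import tensorflow as tf

# Build a TensorProto matching the shape reported above: uint8, 1059x768.
arr = np.zeros((1059, 768), dtype=np.uint8)
proto = tf.make_tensor_proto(arr)

# Time the conversion back to an ndarray with tf.make_ndarray,
# the non-contrib equivalent of tf.contrib.util.make_ndarray.
start = time.time()
result = tf.make_ndarray(proto)
elapsed = time.time() - start

print(f"make_ndarray took {elapsed:.4f} s for shape {result.shape}")
```

A call this size should normally complete in well under a second, which is what makes the 110-second first call so surprising.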
Easy fix in other thread.
@jart @wchargin As people try it out, can you let me know how many and how it goes? I'm about to the point where I'd feel comfortable posting it somewhere, but I'd like to roll it out pretty slowly in case something goes wrong.
@chrisranderson Thank you for making the video! I will start sharing this with people on the Brain team on Monday.
@jart thank you! Any feedback would be much appreciated.
I made a change that tries to detect convolutional activations and visualize them appropriately. Here's an example, which I think is pretty cool. You might have to download the 14.3 MB image to get a good look. Here are VGG activations on a bird dataset: https://github.com/chrisranderson/beholder/blob/master/readme-images/convolutional-activations.png
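For a sense of what "visualize them appropriately" might involve, here is a minimal sketch of tiling the channels of a convolutional activation volume into one 2-D grid image. This is an assumed, generic approach for illustration, not Beholder's actual implementation; the function name and padding parameter are hypothetical:

```python
import numpy as np

def tile_activations(activations, pad=1):
    """Tile a (height, width, channels) activation volume into one 2-D image."""
    h, w, c = activations.shape
    cols = int(np.ceil(np.sqrt(c)))          # near-square grid of channel patches
    rows = int(np.ceil(c / cols))
    grid = np.zeros((rows * (h + pad), cols * (w + pad)), dtype=activations.dtype)
    for i in range(c):
        r, col = divmod(i, cols)
        grid[r * (h + pad):r * (h + pad) + h,
             col * (w + pad):col * (w + pad) + w] = activations[:, :, i]
    return grid

# Example: 64 channels of 8x8 activations tile into an 8x8 grid of patches.
acts = np.random.rand(8, 8, 64).astype(np.float32)
image = tile_activations(acts)
```

Laying channels out side by side like this is what makes per-filter structure (e.g. edge or texture detectors) visible at a glance in a single image.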
Very cool.
Dumb question: the script looks like it's running on MNIST data, but I could swear that I see a bunch of bird-detecting neurons, and I don't see any digits… am I missing something?
One more question: you include arrays=activations + […], where activations includes image_shaped_input, but the PNG that you attached doesn't seem to have any image input at the top (of either birds/flowers or digits). Is that just cropped out?
Although it would be pretty awesome if VGG used birds as an intermediate representation for handwritten digits, that image is from a run on a totally different network that I don't have code for in the repo. :) Sorry about the confusion.
Here's an example from the demo script in the repo:
This resolves my confusion. Thanks. :-)
Now that we've established there's strong support for merging Beholder, you are guaranteed to have a large user base when all is said and done. The only question now is, how famous do you want to be?
If you want to go viral, here's the narrative I would propose for a new video:
Add the magma color map and make the frame rate as high as you possibly can, to optimize fabulousness.
Might make you cringe, but it would capture attention if you said something like: Beholder gives you a live stream into your neural network. It shows you what TensorFlow is thinking in real time. Say this in the first five seconds. Then append some talking points about how neural networks are mysterious, and that even the best researchers in the world sometimes feel like they don't understand what's going on inside them; but thankfully, Beholder provides much-needed transparency that helps everyone see the big picture. Be as 100 IQ as possible in this intro segment.
Then say, "To demonstrate, here's a thirty (or sixty) second time-lapse of TensorFlow learning how to recognize handwritten digits." I would also recommend kludging an accuracy scalar plot into the bottom left corner, which would greatly increase the emotional and informative appeal of the time lapse. Narrate the time lapse using a layman explanation that has some big words sprinkled in, e.g. convolutional layer.
Now go full technical, show code, and basically do all the things your current video does.
My thinking is basically that, in order for your video to go viral on social media, it needs to be emotionally relatable to media people.
I also want to mention that there's actually a good chance that the media content you already created will get a lot of traction. My suggestions are merely something that I think will help it get even more.
How famous do you want to be?
Let's see, Justin Bieber has 99.1 million followers on Twitter, so whatever it takes to get to more than that. :)
My vote is: let's use whatever marketing potential there is in this code for the benefit of TensorBoard, and keep things on the down low until it is finally released as a part of TensorBoard itself. I think it might be interesting to share it with an outside-of-Google audience for beta testing, though.
Do you think someone at Google could make use of your ideas in a TensorBoard video?
That's very gracious of you. I do believe TensorBoard and its users come first. I also believe our contributors who make TensorBoard better for them deserve to be as successful as possible, so I'm happy to play this however you want. I guess we can put those ideas on the shelf for a video in the future.
With that said, would you like me to ask our people to tweet and promote your existing video tomorrow?
If you'd like to, that's fine. I'm a bit worried about it being fragile. I've never tried it on OS X or Windows, and I have noticed a bug that requires me to refresh the page, relating to stopping and restarting training or something. I'm headed on vacation Wednesday so my time to put out fires will be limited.
I wouldn't stress out about it. Remember that this would be a beta announcement, it's not a production service, and it's open source. The expectations are not high. However it does say good things that you're holding yourself to high standards regardless.
However, be sure to check that anyone who sends you a pull request has signed Google's CLA before merging, even if it's a single line. Otherwise we're going to have to track them down later, or redo their changes.
Hey do you have a twitter account? What's the name?
Thank you for the encouragement! I'll be available for most of tomorrow to manage things. I'll make sure to mention the CLA.
Yep, it's @chrisdotio https://mobile.twitter.com/chrisdotio.
Great. I requested a tweet between 9 and 12 a.m. PST tomorrow. You should be mentioned.
@jart @wchargin: I'm almost there - I almost had it ready today, but I'm not very good with video editing software, so I'll have to give editing another go. I might not finish today, but definitely will be done by tomorrow.