I often get asked "How does this work?" I generally just explain that it's a bot that looks for eyes in the images on /r/all and adds glasses if eyes and a face are found. But it would be great to have a live stream of it working on actual posts, or a short video visually demonstrating each step of the process, in the same order the bot performs it:
1: Show images that get detected.
2: Got an image with eyes? Draw a rect on them.
3: Put glasses on that image.
Let's extend base functionality first (save uptime on crash, get Twitter additions with karma working, etc.), then work on this as a secondary priority. No rush.
I think this would help bring a "wow" factor to the bot for people who aren't willing to dig into the code or read in-depth technical specs. Just a thought.