fenomas / glsl-projectron

WebGL demo to evolve data that projects into a given target image
https://fenomas.github.io/glsl-projectron/

Averaging using mipmaps #3

Open · EliasHasle opened this issue 5 years ago

EliasHasle commented 5 years ago

Hello! Those are some nice and fast results you get with such a simple setup. I have been experimenting with something similar myself, with an emphasis on the GA (genetic algorithm). The whole GA runs in JS, and just like you, I have found that readPixels is the bottleneck. Thus, my next step is to batch many individual fitnesses into a single framebuffer before reading them all at once. Later, I will try to move the individual specifications into vertex textures and do the evolution almost entirely on the GPU too.

I would just like to share an idea I have used for the averaging: mipmaps. When you have an image with square power-of-two dimensions, the smallest (single-pixel) level of the mipmap chain is the average color. This is something the GPU is optimized to do, and it does not require repeated applications of a custom shader. I simply draw a single-pixel representation of the texture.
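Roughly what I mean, as a raw-WebGL sketch (names like `candidateTex` and `drawFullscreenQuad` are just illustrative, and this assumes the candidate has already been rendered into a square power-of-two texture):

```js
// Sketch: let the GPU average via the mip chain, then read one pixel back.
gl.bindTexture(gl.TEXTURE_2D, candidateTex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.generateMipmap(gl.TEXTURE_2D);     // GPU builds every level down to 1x1

// Drawing a full-screen textured quad into a 1x1 viewport makes the sampler
// minify all the way down the chain, so the output pixel is the average color.
gl.viewport(0, 0, 1, 1);
drawFullscreenQuad();                 // any textured-quad draw helper

const avg = new Uint8Array(4);
gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, avg);
```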

And for the batching I am planning, I will just draw multiple single-pixel representations: 16-ish per draw call. Then I think I can slide the quad over the buffer across multiple draw calls without clearing it, and finally readPixels the whole population's fitness at once.
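The batched version would then be something like this (again just a sketch; `renderCandidateToTexture` and `fitnessFBO` stand in for whatever draws one candidate and holds the packed results):

```js
// Sketch of the batching: one averaged pixel per candidate, packed into a
// POP_SIZE x 1 framebuffer, read back with a single readPixels call.
const POP_SIZE = 16;
for (let i = 0; i < POP_SIZE; i++) {
  renderCandidateToTexture(i);                    // draw + mipmap candidate i
  gl.bindFramebuffer(gl.FRAMEBUFFER, fitnessFBO); // POP_SIZE x 1 attachment
  gl.viewport(i, 0, 1, 1);                        // slide the quad, no clear
  drawFullscreenQuad();                           // writes candidate i's average
}

const fitness = new Uint8Array(POP_SIZE * 4);
gl.readPixels(0, 0, POP_SIZE, 1, gl.RGBA, gl.UNSIGNED_BYTE, fitness);
```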

BTW, here is an example of what I have done, in this case using a setup that draws GL points: 2kgen

fenomas commented 5 years ago

Brilliant! Comparing more than one image at once is definitely the logical next step for this project, but I never got around to trying it (I was just starting to learn shaders). I'd be very happy to take any PRs around this!

Regarding mipmaps, the reason I do the "average and reduce" step in a custom shader is precision. My guess is that if you use mipmaps, you'll find that late in the evolution the candidate images are similar enough that their mipmaps (or mipmaps of the diff between the candidate and the target) are identical even at larger sizes. But maybe I'm misunderstanding your idea?
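For reference, each pass of that reduce step does something like the following (a simplified sketch rather than the shader verbatim):

```js
// Simplified sketch of one "average and reduce" pass: each output pixel
// averages a 2x2 block of the input, so each pass halves the image and keeps
// full float precision all the way down to 1x1.
const reduceFrag = `
  precision highp float;
  uniform sampler2D uInput;  // output of the previous pass
  uniform vec2 uTexel;       // 1.0 / input resolution
  varying vec2 vUV;          // maps each output pixel onto its 2x2 input block
  void main() {
    gl_FragColor = 0.25 * (
      texture2D(uInput, vUV) +
      texture2D(uInput, vUV + vec2(uTexel.x, 0.0)) +
      texture2D(uInput, vUV + vec2(0.0, uTexel.y)) +
      texture2D(uInput, vUV + uTexel));
  }`;
```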

The variation using GL points instead of polygons looks cool too! It would be neat if this could be abstracted - i.e. if the logic that draws a genome into an image were fully separate from the logic that checks fitness and evolves genomes, so that the user could choose "points or polygons" in the UI and then evolve a result using either version.

EliasHasle commented 5 years ago

> Comparing more than one image at once is definitely the logical next step for this project, but I never got around to trying it (I was just starting to learn shaders). I'd be very happy to take any PRs around this!

Hm, were you thinking about something like this, or perhaps a simple genetic algorithm with only mutations and single-parent offspring? I note that OpenAI has achieved remarkable results with both in the field of neural policy optimization, i.e. deep reinforcement learning. I doubt I can make a PR, though. Maybe, if it turns out we can join our projects. You have a cool project name and a GitHub repository for it. I never find good names for things, which is one of the reasons I don't publish them. 😛

The mipmaps are made in float (I simply specify a float type in three.js, though it depends on the extension). I don't see why the result would be different from repeated downscaling (which is how I understand your approach). Update: Look here
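Concretely, the three.js setup I mean is roughly this (a sketch; `onePixelRT` is illustrative, exact option names vary a bit between versions, and float filtering/mipmapping needs the relevant OES extensions on WebGL1):

```js
// Sketch: a float render target whose mip chain the GPU generates, so the
// 1x1 level holds a float-precision average instead of an 8-bit one.
const candidateRT = new THREE.WebGLRenderTarget(512, 512, {
  type: THREE.FloatType,                     // float texels, not UNSIGNED_BYTE
  generateMipmaps: true,                     // build the mip chain on render
  minFilter: THREE.LinearMipmapLinearFilter, // sample down the chain
});
renderer.setRenderTarget(candidateRT);
renderer.render(candidateScene, camera);     // draw one candidate

// Draw a quad textured with candidateRT.texture into a 1x1 float target
// (onePixelRT here), then read that single averaged pixel back.
const avg = new Float32Array(4);
renderer.readRenderTargetPixels(onePixelRT, 0, 0, 1, 1, avg);
```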

> The variation using GL points instead of polygons looks cool too! It would be neat if this could be abstracted

It is, in my solution. The optimization algorithm is abstracted too, in fact. But I had no great success with vanilla PSO (particle swarm optimization), and only limited success with my tweaks to the GA. So I will probably make a simpler GA that can run much faster, ideally with all the heavy lifting happening on the GPU.

fenomas commented 5 years ago

Hi, sorry - I initially thought you were saying you had started with this repo and built off it, hence what I said about a PR. Also, thanks for the mipmap explanation - I've only dabbled a tiny bit with shaders, so I assumed the GPU generated mipmaps as textures in some RGBA format or another. If they can be floats, that sounds like a great hack!

Regarding the GA: when I was working on this I played around with various ways of selecting genomes and doing mutations, but my experience was that complex/clever ideas didn't seem to converge any faster than the most naive approach. So yeah, my guess would definitely be that doing more iterations per readPixels call will be far more valuable than a smarter GA.

Anyway definitely steal anything you like from this repo, e.g. the demo UI. If you post it live somewhere I'd love to have a look!

EliasHasle commented 5 years ago

Thank you for that. :-) The demo UI is really smooth!

Who knows, maybe it will really become a PR.

fenomas commented 5 years ago

The best advice I can give is: never, ever neglect the UI... very few people will try a project if they have to fork it and then npm i, but if they can just drag an image onto the browser, then... 😀