simongeilfus opened this issue 9 years ago
Perlin noise doesn't have proper derivatives. If you need derivatives, I'd suggest using Perlin's newer simplex noise instead.
Thanks for the fast reply Nick. Yes, Perlin noise is a bit outdated, but it's really convenient to already have it in Cinder. I was about to ask about GLM_GTC_noise, which is based on Stefan Gustavson's implementation. As GLM is probably going to stay, I'm curious what people think about rewriting the current implementation around those functions?
I've had good experiences with Gustavson's implementation in GLSL. While we're on the subject, would it make sense to replace all of CinderMath.h with GLM implementations also? This would have the added benefit of unifying the way we call math functions for scalars and vectors.
I'd be open to a rewrite around what's in GLM. Cinder's implementation is not the most spectacular, and I imagine there's room to improve the interface as well. @notlion a good bit of the math stuff has been replaced. What are you thinking of specifically?
@andrewfb I'm referring specifically to CinderMath.h. Most (if not all?) of the functions there could be brought over from GLM with a using declaration. Personally I end up calling these math functions from std:: or glm:: because the Cinder ones can't infer types. I'd much prefer a math::sqrt() call to math<float>::sqrt().
@andrewfb I forgot to mention that I mean specifically the math:: struct. Some of the more complicated functions don't have a GLM equivalent.
I've tested Gustavson's C++ implementations extensively; they're good, and fast.
I'm also using glm pretty heavily, although I have the luxury of using clang's native implementations (and consequent optimizations!) in my work. If you have xcode 6, take a peek in <simd/simd.h> :)
One thing I might suggest as a slight alternative, in order not to disrupt the already existing Perlin class and the way it does things: we could extend / wrap the noise functions provided by GLM_GTC_noise with features such as derivatives, different numbers of octaves of fbm noise, etc, perhaps in a new file such as cinder/Noise.h. The thing I like about this approach is that we can build on what looks to be the most popular noise functions (simplex mainly) while keeping a more functional style that doesn't require constructing a new object to create a noise pattern.
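To illustrate the functional style being proposed, here is a minimal sketch of what an fBm helper in a hypothetical cinder/Noise.h could look like. The noise1() stand-in below is just a deterministic hash so the sketch is self-contained; a real version would call glm::simplex() from GLM_GTC_noise instead, and all names here are hypothetical:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Stand-in for glm::simplex(); a real cinder/Noise.h would delegate to
// the GLM_GTC_noise functions. This hash just makes the sketch compile
// on its own and returns a deterministic value in [-1,1].
static float noise1( float x )
{
    uint32_t i = static_cast<uint32_t>( std::floor( x * 127.1f ) ) * 0x9E3779B9u;
    i ^= i >> 15;
    return static_cast<float>( i & 0xFFFFFF ) / 16777215.0f * 2.0f - 1.0f;
}

// Functional fBm: no object to construct, octave count as a parameter.
float fBm( float x, int octaves = 4, float lacunarity = 2.0f, float gain = 0.5f )
{
    float sum = 0.0f, amp = 0.5f;
    for( int i = 0; i < octaves; ++i ) {
        sum += amp * noise1( x );
        x   *= lacunarity;
        amp *= gain;
    }
    return sum; // stays roughly within [-1,1] for gain = 0.5
}
```

Keeping octaves, lacunarity, and gain as parameters means callers never construct an object, which matches how noise is typically used in GLSL.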
@meshula glm + clang + xcode 6? Or are you referring to noise functions in clang / xcode 6? Where do we 'peek'?
It looks like it stripped the file paths from my comment, gosh!!
simd/simd.h in the latest SDK shows all the new stuff. Imagine angle brackets :)
Basically, all the GLSL functions and the OpenCL vector/matrix syntax are part of clang now, so you can write code like:
simd::float4 a = { 1, 2, 3, 4 };
simd::float4 b = { 0, 1, 1, 3 };
float c = dot( a, b ); // dot returns a scalar
b = a.xyxy; // GLSL-style swizzle
and so on and so forth. The generated code will use SIMD registers appropriately, and do all the nice optimizations. This stuff isn't fully cross platform yet, but it will be once VS has the latest clang available.
Thanks for the feedback!
I've been comparing the Cinder, GLM and Gustavson implementations a bit. Apparently GLM's simplex noise is much, much slower than ci::Perlin or the original implementation (ci: 52ms, glm: 365ms, Gustavson: 45ms for a 640x480 Channel). I haven't really dug into the sources enough to tell why, but I'm probably going to try writing something from Gustavson's sources.
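For what it's worth, a timing comparison like the one above can be sketched roughly as follows; testNoise() is a stand-in for whichever implementation is being measured (ci::Perlin::noise(), glm::simplex(), or Gustavson's snoise2()), and its output is just a cheap placeholder, not real noise:

```cpp
#include <cassert>
#include <chrono>
#include <cmath>
#include <vector>

// Stand-in for the noise function under test; swap in the real
// implementation being benchmarked.
static float testNoise( float x, float y )
{
    return std::sin( x * 12.9898f + y * 78.233f );
}

// Fill a w x h float channel (like a ci::Channel32f) and return the
// elapsed time in milliseconds.
double timeFill( int w, int h )
{
    std::vector<float> channel( w * h );
    auto start = std::chrono::high_resolution_clock::now();
    for( int y = 0; y < h; ++y )
        for( int x = 0; x < w; ++x )
            channel[y * w + x] = testNoise( x * 0.01f, y * 0.01f );
    return std::chrono::duration<double, std::milli>(
        std::chrono::high_resolution_clock::now() - start ).count();
}
```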
@andrewfb Do you remember the idea behind those "if"s I mentioned above? Should we get rid of them until there's a proper new implementation?
@richardeakin I get what you're saying... but what about leaving the API as it is now, changing its implementation, and adding other noise classes like Simplex, Curl, etc.? Once the different classes are done we can think about static/convenience versions of them. What do you think?
Well, I guess the point for me would be to move towards a functional interface like we're already accustomed to using in GLSL. I don't see the need to construct a class to create noise; the seed is already provided as part of the vec argument.
Strange to hear that GLM's simplex implementation is that much slower than the original, I bet @g-truc would want to know about that.
I agree. I wrote a quick first draft: https://gist.github.com/simongeilfus/fc15d834e7a24d1ecd6c It's not finished, but it's a good start. The .cpp file is more or less Stefan Gustavson's code; I just adapted it to GLM types and added different types of noise sums.
I also started working on rescaling the derivative output, as it doesn't seem to be in the -1,1 range.
I added dfBm functions and a few other experimental sums from Iñigo Quilez. The one driven by a mat2 is really interesting.
There's something I can't really understand about Stefan Gustavson's noise-derivative implementation. From my understanding, one of the advantages of simplex noise over Perlin is a range closer to -1,1. But from what I can read in his code, he's rescaling the noise to match the look of Perlin instead of staying in the same range as regular simplex noise. (I'm not talking about rescaling the derivatives; I'm talking about rescaling the whole thing so that the noise itself falls between -1 and 1... the derivatives are larger than that.) In my opinion it would be more intuitive to get the same kind of values from Simplex::noise( ... ) and Simplex::dnoise( ... ).x.
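One way to check that empirically is to scan the function's output and derive a rescale factor. This is only a sketch: probe() stands in for the actual Gustavson function, and it deliberately exceeds -1,1 to show the correction.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Stand-in noise; replace with Gustavson's sdnoise2() (or any noise
// under inspection) to measure its actual output range. This one
// deliberately spans roughly [-1.3, 1.3].
static float probe( float x ) { return 1.3f * std::sin( x ); }

// Scan the function and return the factor that maps its observed
// range back onto [-1,1].
float rescaleFactor()
{
    float lo = 1e9f, hi = -1e9f;
    for( int i = 0; i < 100000; ++i ) {
        float v = probe( i * 0.001f );
        lo = std::min( lo, v );
        hi = std::max( hi, v );
    }
    return 2.0f / ( hi - lo );
}
```

Multiplying the noise by this factor would bring Simplex::noise() back into the same range as Simplex::dnoise( ... ).x, which is the consistency argued for above.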
So I just experienced the same jarring discontinuities with dfBm. These discontinuities are invisible in the samples because the returned values are used as forces/accelerations, which smooths out any glitch.
As a side note, I wanted to try out the glm::noise3 function. However, I can't successfully include it on Windows, even with all of the following:
#include <glm/gtc/noise.hpp>
#include <glm/detail/_noise.hpp>
#include <glm/detail/func_noise.hpp>
Anyone had luck? I'm assuming there might be a macro definition conflict here, or something.
I came across a similar problem as @num3ric.
I managed to include the glm::noise3 functions on Mac with #include "glm/detail/func_noise.hpp", but when called with a vec3 argument I get two compile errors in glm/detail/_noise.hpp.
test code:
#include <glm/detail/func_noise.hpp>
vec3 noise31 = glm::noise3(1.0f); // good
vec3 noise32 = glm::noise3(vec2(1.0f)); // good
vec3 noise33 = glm::noise3(vec3(1.0f)); // compile error
vec3 noise34 = glm::noise3(vec4(1.0f)); // compile error
@araid I think (but I'm not sure) that the stuff in detail/func_noise.hpp is defunct; I opened the issue you see above on the glm repo to ask. I believe you should be using something like the following instead:
#include <glm/gtc/noise.hpp>
vec3 n = glm::simplex( vec3( 1 ) );
@simongeilfus thanks for the simplex noise initiative. I had to change the includes to build on OS X. I think there might be an issue when wrapping the dnoise( vec3 ) indices. I'm not sure if I can send pull requests to gists, so here are the changes: https://gist.github.com/gaborpapp/1eb8b3366840843e6fb0/revisions
@meshula A little bit late, but I wanted to ask why you said "Perlin noise doesn't have proper derivatives." What is the math behind that? I'm pretty sure that Perlin noise does have an easy-to-compute analytical gradient; I haven't looked at the Cinder implementation, but perhaps it's just wrong?
Ken published various tweaks to his noise over the years to try to solve the problem, and the problem is solved in simplex noise. But his original, easy-to-evaluate noise function has non-zero values in the second derivative. cf. section 5.3: http://http.developer.nvidia.com/GPUGems/gpugems_ch05.html
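For anyone curious about the math: the discontinuity comes from the interpolant. Classic Perlin noise blends gradients with 3t^2 - 2t^3, whose second derivative does not vanish at the cell boundaries; "improved" Perlin switched to the quintic 6t^5 - 15t^4 + 10t^3, whose first and second derivatives are both zero at t = 0 and t = 1. A quick sketch of the two fade curves and their second derivatives:

```cpp
#include <cassert>

// Original Perlin fade: 3t^2 - 2t^3. Its second derivative, 6 - 12t,
// is non-zero at t = 0 and t = 1, so the second derivative of the
// noise is discontinuous across cell boundaries.
float fadeOld( float t )  { return t * t * ( 3 - 2 * t ); }
float fadeOld2( float t ) { return 6 - 12 * t; } // second derivative

// Improved Perlin fade: 6t^5 - 15t^4 + 10t^3. First AND second
// derivatives vanish at the endpoints, removing that discontinuity.
float fadeNew( float t )  { return t * t * t * ( t * ( t * 6 - 15 ) + 10 ); }
float fadeNew2( float t ) { return t * ( t * ( t * 120 - 180 ) + 60 ); } // second derivative
```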
@meshula Ah, okay, I thought you meant there isn't a good way to calculate the derivatives. A good example of the edge behaviour of the "improved Perlin" noise derivatives is:
https://briansharpe.files.wordpress.com/2015/07/perlin2dderiv.png
And it's talked about in the post:
https://briansharpe.wordpress.com/2015/07/19/analytical-noise-derivatives/
Simplex basically improves everything about Perlin noise, but I don't know, somehow I still always use the "classic" improved Perlin noise anyway; perhaps I just like the simplicity of the algorithm... A few layers should hide the derivative artefacts along the lattice edges.
Although, one thing about Cinder's fBm is that it always doubles the frequency... that means the lattice lines up across all the layers, which probably doesn't help the case above. Change the multipliers a bit, or add something to the coordinates so the grids shift for each layer, and that should help a lot.
So in regards to the original post, I would suggest just quickly trying to adjust dfBm, and I think you won't see the bands.
But also, I'm pretty sure dfBm is incorrect in how it handles the gradients.
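The tweak suggested above might look something like this sketch; noise2() is a hash placeholder for the real 2D noise call, and the lacunarity and offset constants are just example values:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Placeholder for a real 2D noise call (e.g. Perlin::noise( x, y )).
// Deterministic hash returning a value in [-1,1].
static float noise2( float x, float y )
{
    uint32_t xi = static_cast<uint32_t>( static_cast<int32_t>( std::floor( x ) ) );
    uint32_t yi = static_cast<uint32_t>( static_cast<int32_t>( std::floor( y ) ) );
    uint32_t h = ( xi * 73856093u ) ^ ( yi * 19349663u );
    h *= 0x9E3779B9u; h ^= h >> 16;
    return static_cast<float>( h & 0xFFFFFF ) / 16777215.0f * 2.0f - 1.0f;
}

// fBm with a non-integer lacunarity and a per-octave coordinate offset,
// so the lattice of each layer no longer lines up with the previous one.
float fBm2( float x, float y, int octaves = 5 )
{
    const float lacunarity = 1.98f; // instead of exactly 2.0
    float sum = 0.0f, amp = 0.5f;
    for( int i = 0; i < octaves; ++i ) {
        sum += amp * noise2( x, y );
        x = x * lacunarity + 13.7f; // shift the grid each octave
        y = y * lacunarity + 5.3f;
        amp *= 0.5f;
    }
    return sum;
}
```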
Hi all,
I got really used to having access to noise derivatives in shaders, and was surprised to discover that Cinder's CPU implementation has some nasty artefacts. I'm trying to figure out what is wrong and was wondering if someone remembers the reason behind those ifs here and there?
Here's a screenshot with the current implementation:
And another after removing the two ifs in the Perlin::dnoise method. I can still see some vertical lines, but it's already much better, so I'm really wondering what the use of those two conditions is. Obviously those vertical lines (and horizontal ones with the second derivative) create a lot of issues, whether you use derivatives for animation, modeling or texturing.
Am I missing something about how to use Perlin::dnoise and Perlin::dfBm? You can check out this branch and try it with the code that I used to make the above screenshots:
This is somewhat unrelated to this issue, but now that I'm asking about noise derivatives: I find it really convenient for dnoise to also return the noise value. For example, dnoise( vec2 ) would return a vec3 with the noise itself in the x component and the two derivatives in the y and z components. It seems to be a common way of doing it.
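For illustration, that packed-return convention could be sketched like this; field() is a stand-in smooth function, and the derivatives here are computed by central differences rather than analytically, so all names are hypothetical:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Stand-in smooth 2D field; a real implementation would compute the
// analytical simplex derivatives instead of finite differences.
static float field( float x, float y ) { return std::sin( x ) * std::cos( y ); }

// Packed convention mirroring a vec3 return from dnoise( vec2 ):
// result[0] = noise value, result[1] = d/dx, result[2] = d/dy.
std::array<float, 3> dnoise( float x, float y )
{
    const float e = 1e-3f;
    float n  = field( x, y );
    float dx = ( field( x + e, y ) - field( x - e, y ) ) / ( 2 * e );
    float dy = ( field( x, y + e ) - field( x, y - e ) ) / ( 2 * e );
    return { n, dx, dy };
}
```

A caller would then write auto r = dnoise( x, y ); and read the noise from r[0] and the derivatives from r[1] and r[2], getting both for the price of one evaluation.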