lmas / opensimplex

This repo has been migrated to https://code.larus.se/lmas/opensimplex

A small issue regarding noiseXarray #22

Closed: zodiuxus closed this 2 years ago

zodiuxus commented 2 years ago
>>> n=simplex.noise3array(x=10, y=10, z=10)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python39\lib\site-packages\opensimplex\opensimplex.py", line 34, in noise3array
    return _noise3a(x, y, z, self._perm, self._perm_grad_index3)
  File "C:\Python39\lib\site-packages\opensimplex\opensimplex.py", line 111, in _noise3a
    noise = np.zeros(x.size, dtype=np.double)
AttributeError: 'int' object has no attribute 'size'

Perhaps a different attribute needs to be used?

lmas commented 2 years ago

Those functions expect numpy arrays as inputs, not ints like you tried to use, but I think I forgot to document that..

I'll try to take a closer look at this later this evening!
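
In the meantime, here's a minimal sketch of the intended array-based usage (the seed and sizes are arbitrary):

import numpy as np
import opensimplex

simplex = opensimplex.OpenSimplex(seed=1234)

# Each argument is a 1D numpy array of sample coordinates, not an int.
x = np.linspace(0, 1, 10)
y = np.linspace(0, 1, 10)
z = np.linspace(0, 1, 10)

# Returns a numpy array of noise values covering every (x, y, z) combination.
values = simplex.noise3array(x, y, z)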

zodiuxus commented 2 years ago

Oh! My bad, I thought that it would for some reason return an array from the given values, rather than having to be passed an array. I assume it also returns an array of values then?

Also, I wanted to know: does it use Numba by default upon detecting it, making it run on CUDA cores, or do I have to force CUDA acceleration myself?

lmas commented 2 years ago

Yep, it will return an array. As an example you can take a look at the test file in the tests/ dir. I'll make sure to update the docs!

And yep, the lib will try to use numba automatically. I don't have much experience with it though, so I would be really grateful if it got some more usage and testing.
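
A quick way to confirm whether that optional speedup can kick in is to check the import:

try:
    import numba
    print("numba", numba.__version__, "found; the accelerated path can be used")
except ImportError:
    print("numba not installed; falling back to the plain numpy path")

-- Alex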

zodiuxus commented 2 years ago

I'll go ahead and test out a function with @vectorize and one without and compare the time it takes for each.
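
Roughly this shape, using a pure-math stand-in for the noise call (arbitrary constants, sketch only):

import time
import numpy as np
from numba import vectorize

@vectorize(['float64(float64, float64)'])
def toy(x, y):
    # Stand-in for a per-point noise call; pure math so Numba can compile it.
    return (x * 12.9898 + y * 78.233) % 1.0

xs = np.random.rand(1_000_000)
ys = np.random.rand(1_000_000)

toy(xs[:10], ys[:10])  # warm up: trigger compilation before timing
start = time.perf_counter()
toy(xs, ys)
print("@vectorize:", time.perf_counter() - start)

start = time.perf_counter()
(xs * 12.9898 + ys * 78.233) % 1.0
print("plain numpy:", time.perf_counter() - start)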

lmas commented 2 years ago

Sounds interesting, much appreciated!

-- Alex

zodiuxus commented 2 years ago

I've been banging my head against this the entire day trying to get it to work. It seems that forcing vectorization is either unnecessary, or it needs to be done in the module itself; I'm not sure. Removing the @vectorize makes the issue go away, but then it's not using CUDA or multiple cores.

Your module already does caching by default, which does speed up the process, but the issue seems to lie elsewhere.

  warn(NumbaDeprecationWarning(msg))
Traceback (most recent call last):
  File "E:\Programming\Projects\PiDPHW\main.py", line 14, in <module>
    def testfunc(x,y):
  File "C:\Python39\lib\site-packages\numba\np\ufunc\decorators.py", line 125, in wrap
    vec.add(sig)
  File "C:\Python39\lib\site-packages\numba\np\ufunc\deviceufunc.py", line 397, in add
    corefn, return_type = self._compile_core(devfnsig)
  File "C:\Python39\lib\site-packages\numba\cuda\vectorizers.py", line 15, in _compile_core
    cudevfn = cuda.jit(sig, device=True, inline=True)(self.pyfunc)
  File "C:\Python39\lib\site-packages\numba\cuda\decorators.py", line 104, in device_jit
    return compile_device(func, restype, argtypes, inline=inline,
  File "C:\Python39\lib\site-packages\numba\cuda\compiler.py", line 412, in compile_device
    return DeviceFunction(pyfunc, return_type, args, inline=True, debug=False,
  File "C:\Python39\lib\site-packages\numba\cuda\compiler.py", line 443, in __init__
    cres = compile_cuda(self.py_func, self.return_type, self.args,
  File "C:\Python39\lib\site-packages\numba\core\compiler_lock.py", line 35, in _acquire_compile_lock
    return func(*args, **kwargs)
  File "C:\Python39\lib\site-packages\numba\cuda\compiler.py", line 163, in compile_cuda
    cres = compiler.compile_extra(typingctx=typingctx,
  File "C:\Python39\lib\site-packages\numba\core\compiler.py", line 686, in compile_extra
    return pipeline.compile_extra(func)
  File "C:\Python39\lib\site-packages\numba\core\compiler.py", line 428, in compile_extra
    return self._compile_bytecode()
  File "C:\Python39\lib\site-packages\numba\core\compiler.py", line 492, in _compile_bytecode
    return self._compile_core()
  File "C:\Python39\lib\site-packages\numba\core\compiler.py", line 471, in _compile_core
    raise e
  File "C:\Python39\lib\site-packages\numba\core\compiler.py", line 462, in _compile_core
  File "C:\Python39\lib\site-packages\numba\core\compiler_machinery.py", line 343, in run
    raise patched_exception
  File "C:\Python39\lib\site-packages\numba\core\compiler_machinery.py", line 334, in run
    self._runPass(idx, pass_inst, state)
    return func(*args, **kwargs)
  File "C:\Python39\lib\site-packages\numba\core\compiler_machinery.py", line 289, in _runPass
    mutated |= check(pss.run_pass, internal_state)
  File "C:\Python39\lib\site-packages\numba\core\compiler_machinery.py", line 262, in check
    mangled = func(compiler_state)
  File "C:\Python39\lib\site-packages\numba\core\typed_passes.py", line 105, in run_pass
    typemap, return_type, calltypes, errs = type_inference_stage(
  File "C:\Python39\lib\site-packages\numba\core\typed_passes.py", line 83, in type_inference_stage
    errs = infer.propagate(raise_errors=raise_errors)
  File "C:\Python39\lib\site-packages\numba\core\typeinfer.py", line 1074, in propagate
    raise errors[0]
numba.core.errors.TypingError: Failed in cuda mode pipeline (step: nopython frontend)
Unknown attribute 'noise2' of type Module(<module 'opensimplex' from 'C:\\Python39\\lib\\site-packages\\opensimplex\\__init__.py'>)

File "main.py", line 15:
def testfunc(x,y):
    return os.noise2(x,y)
    ^

During: typing of get attribute at E:\Programming\Projects\PiDPHW\main.py (15)

File "main.py", line 15:
def testfunc(x,y):
    return os.noise2(x,y)
    ^

lmas commented 2 years ago

Unknown attribute 'noise2' of type Module

That seems odd? Are you using opensimplex v0.4? Instead of the module helpers, you could try importing the OpenSimplex class and calling that instead (the old method).

I'm having exams in a week, so I really can't dig deeper into bigger issues like this for the next week or so :(
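
For reference, the class-based call would look something like this (assuming the constructor still accepts a seed):

from opensimplex import OpenSimplex

simplex = OpenSimplex(seed=1234)
value = simplex.noise2(0.5, 0.5)  # single float in, single float out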

zodiuxus commented 2 years ago

Unfortunately, I have exams starting next week as well. I think I already tried the old method (directly calling OpenSimplex.Function), which gave me the same result, but I'll give it another shot.

zodiuxus commented 2 years ago

I tried your idea, but an issue still occurs. I don't know if it's got to do with my own machine or what, but it is a bit annoying. I won't paste the whole Numba error block, so here's the important part:

numba.core.errors.TypingError: Failed in cuda mode pipeline (step: nopython frontend)
Untyped global name 'os': Cannot determine Numba type of <class 'opensimplex.opensimplex.OpenSimplex'>

File "main.py", line 15:
def cuda_os(a,b):
    return os.noise2(a,b)
    ^

That error comes from the following source code:

from numba import vectorize
from opensimplex import OpenSimplex

os = OpenSimplex()

@vectorize(['float32(float32, float32)', 'float64(float64, float64)'], target='cuda')
def cuda_os(a, b):
    return os.noise2(a, b)

And again with

def cuda_os(a, b):
    os = OpenSimplex(1234)
    return os.noise2(a, b)

where it returns

numba.core.errors.TypingError: Failed in cuda mode pipeline (step: nopython frontend)
Untyped global name 'OpenSimplex': Cannot determine Numba type of <class 'type'>

It's weird to see that this only happens with the vectorize tag, or with anything that uses CUDA. Everything else gives me some deprecation warnings but does its thing anyway.

I came back to writing this after looking at how and what CUDA supports: http://numba.pydata.org/numba-doc/latest/cuda/cudapysupported.html and it most likely has to do with the _init function, which returns two arrays; returning those is unsupported by CUDA, as per the last line in the link: functions that return a new array.

perm = np.zeros(256, dtype=np.int64)
perm_grad_index3 = np.zeros(256, dtype=np.int64)

Looks like this is one thing that's preventing this module from becoming fully CUDA-parallelizable, which I would really love to see happen. It may be that I've missed a detail in a different function, but from what I read, the rest only do the calculations that make the noise.

Have you considered using PyTorch in place of Numba as an alternative? PT allows you to set your device for all the necessary calculations, and from the looks of it, it offers much the same functions NumPy does and can convert NumPy arrays to PT tensors. To be fair, this would eliminate the need to force CUDA acceleration, as it can be done by simply passing an argument for which device to use.
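
Roughly the device-passing idea (a generic sketch, not code from any particular fork):

import torch

# One device choice up front; every tensor op on these coordinates then
# runs on the GPU when available, with the CPU as a fallback.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = torch.linspace(0.0, 10.0, 256, device=device)
y = torch.linspace(0.0, 10.0, 256, device=device)
# Noise math written as torch ops on x and y would inherit that device.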

zodiuxus commented 2 years ago

Well, to finish off this interesting part of the project, I've gone ahead and made some changes to make it use PyTorch for easier parallelization, rather than having Numba be a pain in the neck.

Here's the fork: https://github.com/zodiuxus/opensimplex and here's the result from running main.py in the ./tests/ folder:

(-0.7320515695722086, 'cuda', 1.1209726333618164)

(-0.7320515695722086, 'cpu', 0.008008718490600586)

zodiuxus commented 2 years ago

I've also found another issue with the noise generators that don't take arrays as input:

For some reason, numbers seem to randomly get a massive negative exponent, regardless of whether I make them follow the rule I've set for them. I do notice a pattern here, with this happening at (0,0), (3,3), (4,2), and (2,4):

>>> xy = [[(0 if os.noise3(i, j, 0.0)<-1 else os.noise3(i, j, 0.0),(i,j)) for i in range(5)] for j in range(5)]
>>> print(xy)
[[(5.2240987598031274e-67, (0, 0)), (-0.23584641815494026, (1, 0)), (0.04730512605377769, (2, 0)), (-0.03559870550161819, (3, 0)), (0.4973430820248515, (4, 0))],
[(-0.02225418514523135, (0, 1)), (0.09588876902792749, (1, 1)), (-0.2394822006472489, (2, 1)), (-0.4947860481841064, (3, 1)), (-0.38147748611610544, (4, 1))],
[(0.2314115625873986, (0, 2)), (0.16181229773462766, (1, 2)), (0.0754324983019698, (2, 2)), (0.022254185145231333, (3, 2)), (2.1910484798468403e-65, (4, 2))],
[(-0.12944983818770212, (0, 3)), (-0.23908266410963255, (1, 3)), (0.2899836190019574, (2, 3)), (4.4558489421850186e-66, (3, 3)), (0.031883015701786116, (4, 3))],
[(-0.07543249830196942, (0, 4)), (-0.2455551560190179, (1, 4)), (-1.103206738099601e-65, (2, 4)), (-0.15817651524231863, (3, 4)), (-0.06776139677973557, (4, 4))]]

Could it be an overflow?

lmas commented 2 years ago

Could it be an overflow?

~~I've seen this happen before when you're using the dimensions above 2D. I rechecked against another implementation in Go and got the same result, so it's probably an artefact of the algorithm. Overflowing is part of the algo I think (well at least when seeding).~~ Disregard this, see below..

It's weird to see that this only happens with the vectorize tag...

Nah, I had the same struggles and errors when trying to add the @njit tag. I ended up breaking the functions out of the class (which caused the big refactor for v0.4) before Numba would behave. Not too happy about the problems it's been causing (and the excess logging).

PyTorch for easier parallelization, rather than having Numba be a pain in the neck.

Yeah, working with Numba has been a struggle. But there was demand for it (see #4) and I was happy to be able to add it as an optional dependency. Not so sure about replacing it with PyTorch yet, as that would result in a hard dependency that might be too heavy for the casual use cases (like games). I'll take a look at this after next week!
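
The optional-dependency trick is roughly this pattern (a generic sketch, not the module's exact code):

try:
    from numba import njit
except ImportError:
    def njit(*args, **kwargs):
        # No-op fallback so decorated functions still run without numba.
        if len(args) == 1 and callable(args[0]) and not kwargs:
            return args[0]  # used bare, as @njit
        def wrap(func):
            return func  # used with options, as @njit(cache=True)
        return wrap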

zodiuxus commented 2 years ago

Ah, fair enough. It should be possible to keep both a Numba and a PyTorch version. I can probably look further into optimizing and cutting down most of the code for PyTorch, but I'd have to do this in about 3 weeks due to finals.

lmas commented 2 years ago

5.2240987598031274e-67

Ugh, I just now noticed the negative exponents. Those values are so small it's barely worth noting; you might as well treat them as zero.
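
If those stray exponents matter downstream, a one-line cleanup on the caller's side does it (the threshold here is arbitrary):

import numpy as np

values = np.array([5.2240987598031274e-67, -0.23584641815494026, 0.0473])
cleaned = np.where(np.abs(values) < 1e-12, 0.0, values)  # snap near-zero values to exactly zero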

lmas commented 2 years ago

Hmm, I think I'll close this, since the docs have been updated and we're going off topic. I haven't had any other demand for PyTorch, so I won't be doing any work on that for now, but feel free to open a new issue for PyTorch support, or reopen this one if you have further comments about the original issue.