Golova1111 / mspaint_genetic_algo


Error running main.py #2

Open tugrul512bit opened 2 years ago

tugrul512bit commented 2 years ago

Traceback (most recent call last):
  File "main.py", line 7, in <module>
    demo_pic = image.imread('pic/berlin_xsm.jpg').astype(np.int16)
  File "/home/tugrul/anaconda3/lib/python3.7/site-packages/matplotlib/image.py", line 1417, in imread
    with Image.open(fname) as image:
  File "/home/tugrul/anaconda3/lib/python3.7/site-packages/PIL/Image.py", line 2809, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'pic/berlin_xsm.jpg'

My system: Ubuntu 18.04 LTS, FX8150, GT1030, K420 (x2), 4 GB RAM

Golova1111 commented 2 years ago

The file pic/berlin_xsm.jpg is the initial image that the algorithm processes. Upload your own .jpg/.jpeg image (but not a .png) to the pic folder and substitute the file name in the main script.

(will update the readme and add this info, thanks)
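For reference, the loading step that fails in the traceback is matplotlib's `image.imread` followed by an `int16` cast. A self-contained sketch (here a tiny throwaway JPEG is generated first so the snippet runs anywhere; in the project you would point it at your own file in `pic/`):

```python
import numpy as np
from matplotlib import image
from PIL import Image

# Generate a small placeholder JPEG so this sketch is self-contained;
# in main.py the path would be your own file, e.g. 'pic/demo_pic.jpg'.
Image.fromarray(np.zeros((20, 20, 3), dtype=np.uint8)).save('demo_pic.jpg')

# Equivalent of main.py's line 7 from the traceback above.
demo_pic = image.imread('demo_pic.jpg').astype(np.int16)
print(demo_pic.shape)  # (height, width, 3) for an RGB JPEG
```

The `FileNotFoundError` simply means no file exists at the path given to `imread`.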

Golova1111 commented 2 years ago

You can use demo_pic.jpg, which is committed, so just substitute pic/berlin_xsm.jpg with pic/demo_pic.jpg in the main.py script.

tugrul512bit commented 2 years ago

OK, that problem is solved, but it opened up a new one:

Traceback (most recent call last):
  File "main.py", line 8, in <module>
    init(demo_pic)
  File "/home/tugrul/Downloads/vramdisk/mspaint_genetic_algo-master/ga_run.py", line 19, in init
    d_picture = cuda.to_device(demo_pic)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/cudadrv/devices.py", line 223, in _require_cuda_context
    with _runtime.ensure_context():
  File "/home/tugrul/anaconda3/lib/python3.7/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/cudadrv/devices.py", line 123, in ensure_context
    newctx = self.get_or_create_context(None)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/cudadrv/devices.py", line 138, in get_or_create_context
    return self._get_or_create_context_uncached(devnum)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/cudadrv/devices.py", line 153, in _get_or_create_context_uncached
    return self._activate_context_for(0)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/cudadrv/devices.py", line 169, in _activate_context_for
    newctx = gpu.get_primary_context()
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/cudadrv/driver.py", line 529, in get_primary_context
    driver.cuDevicePrimaryCtxRetain(byref(hctx), self.id)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/cudadrv/driver.py", line 295, in safe_cuda_api_call
    self._check_error(fname, retcode)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/cudadrv/driver.py", line 330, in _check_error
    raise CudaAPIError(retcode, msg)
numba.cuda.cudadrv.driver.CudaAPIError: [2] Call to cuDevicePrimaryCtxRetain results in CUDA_ERROR_OUT_OF_MEMORY

Nvidia X Server shows only 100 MB allocated for the task, so it's not really an out-of-global-memory error.
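With three cards in the machine, Numba initializes CUDA device 0 by default. One thing worth trying (this uses `CUDA_VISIBLE_DEVICES`, a standard CUDA environment variable, not anything specific to this project) is pinning the process to a different card before any CUDA initialization happens:

```python
import os

# Must be set before Numba (or anything else) touches the CUDA driver;
# device indices follow CUDA's enumeration order (see nvidia-smi -L).
os.environ['CUDA_VISIBLE_DEVICES'] = '1'  # e.g. a K420 instead of device 0

# ...only import numba.cuda / run main.py after this point
```

If device 0 is the one that is busy or misbehaving, this makes Numba's "GPU 0" a different physical card.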

A second try gives a different error:

Traceback (most recent call last):
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/errors.py", line 745, in new_error_context
    yield
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/lowering.py", line 273, in lower_block
    self.lower_inst(inst)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/lowering.py", line 370, in lower_inst
    val = self.lower_assign(ty, inst)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/lowering.py", line 544, in lower_assign
    return self.lower_expr(ty, value)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/lowering.py", line 1070, in lower_expr
    res = self.lower_call(resty, expr)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/lowering.py", line 806, in lower_call
    res = self._lower_call_normal(fnty, expr, signature)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/lowering.py", line 1033, in _lower_call_normal
    impl = self.context.get_function(fnty, signature)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/base.py", line 570, in get_function
    return self.get_function(fn, sig, _firstcall=False)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/base.py", line 572, in get_function
    raise NotImplementedError("No definition for lowering %s%s" % (key, sig))
NotImplementedError: No definition for lowering <built-in function sqrt>(int64,) -> float64

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 8, in <module>
    init(demo_pic)
  File "/home/tugrul/Downloads/vramdisk/mspaint_genetic_algo-master/ga_run.py", line 56, in init
    prev_winner=prev_winner
  File "/home/tugrul/Downloads/vramdisk/mspaint_genetic_algo-master/GA.py", line 48, in __init__
    for _ in range(self.POPULATION_SIZE)
  File "/home/tugrul/Downloads/vramdisk/mspaint_genetic_algo-master/GA.py", line 48, in <listcomp>
    for _ in range(self.POPULATION_SIZE)
  File "/home/tugrul/Downloads/vramdisk/mspaint_genetic_algo-master/Picture.py", line 198, in generate_default
    p.gen_picture()
  File "/home/tugrul/Downloads/vramdisk/mspaint_genetic_algo-master/Picture.py", line 38, in gen_picture
    _gen_picture(self)
  File "/home/tugrul/Downloads/vramdisk/mspaint_genetic_algo-master/cuda.py", line 174, in _gen_picture
    _calc_elem_delta[blockspergrid, threadsperblock](picture.d_picture, d_image, d_answer)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/compiler.py", line 833, in __call__
    kernel = self.specialize(*args)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/compiler.py", line 844, in specialize
    kernel = self.compile(argtypes)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/compiler.py", line 860, in compile
    **self.targetoptions)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler_lock.py", line 32, in _acquire_compile_lock
    return func(*args, **kwargs)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/compiler.py", line 55, in compile_kernel
    cres = compile_cuda(pyfunc, types.void, args, debug=debug, inline=inline)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler_lock.py", line 32, in _acquire_compile_lock
    return func(*args, **kwargs)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/cuda/compiler.py", line 44, in compile_cuda
    locals={})
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler.py", line 603, in compile_extra
    return pipeline.compile_extra(func)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler.py", line 339, in compile_extra
    return self._compile_bytecode()
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler.py", line 401, in _compile_bytecode
    return self._compile_core()
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler.py", line 381, in _compile_core
    raise e
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler.py", line 372, in _compile_core
    pm.run(self.state)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler_machinery.py", line 341, in run
    raise patched_exception
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler_machinery.py", line 332, in run
    self._runPass(idx, pass_inst, state)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler_lock.py", line 32, in _acquire_compile_lock
    return func(*args, **kwargs)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler_machinery.py", line 291, in _runPass
    mutated |= check(pss.run_pass, internal_state)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/compiler_machinery.py", line 264, in check
    mangled = func(compiler_state)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/typed_passes.py", line 442, in run_pass
    NativeLowering().run_pass(state)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/typed_passes.py", line 370, in run_pass
    lower.lower()
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/lowering.py", line 179, in lower
    self.lower_normal_function(self.fndesc)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/lowering.py", line 233, in lower_normal_function
    entry_block_tail = self.lower_function_body()
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/lowering.py", line 259, in lower_function_body
    self.lower_block(block)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/lowering.py", line 273, in lower_block
    self.lower_inst(inst)
  File "/home/tugrul/anaconda3/lib/python3.7/contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/errors.py", line 752, in new_error_context
    reraise(type(newerr), newerr, tb)
  File "/home/tugrul/.local/lib/python3.7/site-packages/numba/core/utils.py", line 81, in reraise
    raise value
numba.core.errors.LoweringError: Failed in nopython mode pipeline (step: nopython mode backend)
No definition for lowering <built-in function sqrt>(int64,) -> float64

File "cuda.py", line 18:
def _calc_elem_delta(curr_image, picture, answer):
    <source elided>
        answer[x + curr_image.shape[0] * y] = math.sqrt(
            2 * (originr - picr)**2 + 4 * (origing - picg)**2 + 3 * (originb - picb)**2
            ^

During: lowering "$134call_method.25 = call $86load_method.1($132binary_add.24, func=$86load_method.1, args=[Var($132binary_add.24, cuda.py:18)], kws=(), vararg=None)" at /home/tugrul/Downloads/vramdisk/mspaint_genetic_algo-master/cuda.py (18)

Same system: Ubuntu 18.04 LTS, FX8150, GT1030 (1600 MB free memory), K420 (x2) (1900 MB free each), 4 GB RAM

Golova1111 commented 2 years ago

Sorry for the slow answer on my side.

It is hard to answer this for sure. Do you use demo_pic.jpg as the source? One possible cause is that the size of the picture must be a multiple of 20.

From the readme: Important: the size of the initial picture (height x width) should be a multiple of 20, e.g. 180x240 (this limitation comes from the current CUDA implementation).

Another possible cause is that the picture is too big (e.g. 2000x3000).
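A tiny helper (hypothetical, not part of the repo) that rounds dimensions down to the nearest multiple of 20, matching the readme's constraint:

```python
# Round each dimension down to the nearest multiple of 20, as the readme
# requires for the input picture. Helper name is hypothetical.
def snap_to_20(width, height):
    return (width - width % 20, height - height % 20)

print(snap_to_20(1927, 483))  # -> (1920, 480)
print(snap_to_20(240, 180))   # -> (240, 180), already valid
```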

BTW, what is your CUDA version? (output of the nvcc --version command)

Golova1111 commented 2 years ago

I have made a fix that will most probably help you; please pull the new changes and retry.
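For reference, the `NotImplementedError: No definition for lowering <built-in function sqrt>(int64,)` above means Numba's CUDA target had no implementation of `math.sqrt` for an `int64` argument. The usual workaround (stated independently of whatever the actual fix in the repo was) is to cast the operand to float before the call. A plain-Python sketch of the weighted per-pixel distance shown in the traceback from cuda.py line 18:

```python
import math

# Weighted color distance as in cuda.py line 18 of the traceback.
# Casting the integer sum to float before math.sqrt is the common fix
# when a Numba target lacks a sqrt(int64) lowering.
def elem_delta(origin, pic):
    dr = origin[0] - pic[0]
    dg = origin[1] - pic[1]
    db = origin[2] - pic[2]
    return math.sqrt(float(2 * dr**2 + 4 * dg**2 + 3 * db**2))

print(elem_delta((255, 0, 0), (0, 0, 0)))  # sqrt(2 * 255**2)
```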

tugrul512bit commented 2 years ago

Downloaded the new version and it works:

Epoch 0: best_score: 28562554387, improved, time: 0.0004887580871582031, delta 0
/home/tugrul/Downloads/vramdisk/mspaint_genetic_algo-master(1)/mspaint_genetic_algo-master/Color.py:238: RuntimeWarning: divide by zero encountered in true_divide
  delta = 1 / delta
Epoch 1: best_score: 28561567378, improved, time: 3.483412027359009, delta 0
Epoch 2: best_score: 26059836597, improved, time: 6.3118250370025635, delta 0
Epoch 3: best_score: 24100076967, improved, time: 9.166004419326782, delta 0
...
Epoch 33: best_score: 17960264902, improved, time: 103.7715368270874, delta 0.9363721184636653
Epoch 34: best_score: 17792942807, improved, time: 106.88602495193481, delta 0.9368860235362068
Epoch 35: best_score: 17712805091, improved, time: 110.11393141746521, delta 0.9331845412679621
Epoch 36: best_score: 17475201062, improved, time: 113.28476810455322, delta 0.9279962944851917
...
Epoch 68: best_score: 14098447349, improved, time: 223.6537356376648, delta 0.9962407627217463
epoch: 1, figures: 2, score: 14098447349
delta score: 36158846
[
    Rectangle(p1=[14, 52], p2=[356, 582], color=[100, 177, 79], color_delta=0, angle=-2.2079872659164383, max_size=(480, 360)),
    Rectangle(p1=[141, 0], p2=[440, 247], color=[172, 123, 90], color_delta=0, angle=-1.7580708462155854, max_size=(480, 360)),
]
...
FileNotFoundError: [Errno 2] No such file or directory: '/home/vadym/University/Term 3/EvolAlg/Project/pic/save/epoch2_2022-02-20 17:58:37.810836.png'
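As an aside, the RuntimeWarning in the log above comes from Color.py computing `delta = 1 / delta` on an array that can contain zeros. A common NumPy guard for that pattern (a sketch, not necessarily how the project handles it):

```python
import numpy as np

delta = np.array([0.0, 2.0, 4.0])

# A plain 1 / delta warns "divide by zero encountered in true_divide" and
# yields inf for zero entries. Guarding the denominator avoids both:
safe = np.where(delta == 0, 0.0, 1.0 / np.where(delta == 0, 1.0, delta))
print(safe)  # [0.   0.5  0.25]
```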

Thank you. I think the GT1030 is not very good alone :) and it only reaches 50% GPU utilization (must be Python's latency), but it works and optimizes best_score, which I guess is the pixel difference from the original image.

Maybe the file-not-found error is my fault? Should I pass it some parameters or edit something? I guess I can edit it and solve it myself. Thank you.

Here is the nvcc output:


/usr/local/cuda-10.2/bin$ ./nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89

All CUDA programs work, but nvcc is not visible outside of that folder.
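That just means the CUDA bin directory is not on your PATH; the standard fix (unrelated to this project) is:

```shell
# Append to ~/.bashrc so nvcc is found from any directory.
export PATH=/usr/local/cuda-10.2/bin:$PATH
```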

Golova1111 commented 2 years ago

No, the FileNotFoundError here is entirely my fault. I updated the code and removed the hardcoded path (/home/vadym/University..) from the source. Please pull and restart.

Thanks for reporting the errors!

tugrul512bit commented 2 years ago

Now it works fully:

(screenshot: GT1030 run)

How do 75 epochs in 234 seconds (on a GT1030 overclocked by 100 MHz) compare to other GPUs? Do you have benchmarks?

Golova1111 commented 2 years ago

On my own GTX 1060 6GB: around 44 seconds for 64 epochs (final result for the set of 2 figures).

Golova1111 commented 2 years ago

You can speed up the algorithm by simply feeding it a smaller image: instead of 480x360 as now, use 240x180. That is still enough to recreate the image with good quality and will cut the time roughly in half.

(The fine-tuned result of recreating the demo image from a resized 240x180 copy: img. It took around 4.5 hours on my GPU; good-enough results (img) are available in about half an hour.)
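Going from 480x360 to 240x180 quarters the pixel count. A minimal numpy sketch of such a downscale (assuming a simple 2x2 averaging reduction; any image resizer would do the same job):

```python
import numpy as np

# Halve each dimension by averaging 2x2 pixel blocks.
# Hypothetical helper, not part of the repo.
def halve(img):
    h, w = img.shape[:2]
    img = img[:h - h % 2, :w - w % 2]          # drop any odd edge row/column
    return img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

small = halve(np.ones((360, 480, 3)))
print(small.shape)  # (180, 240, 3)
```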

tugrul512bit commented 2 years ago

I think my CPU (FX8150 at 2.1 GHz) bottlenecks all Python programs overall. Nice project. Have a nice day.