princeton-vl / infinigen

Infinite Photorealistic Worlds using Procedural Generation
https://infinigen.org
BSD 3-Clause "New" or "Revised" License

How can I use GPU acceleration? #55

Closed luoluoluooo closed 1 year ago

luoluoluooo commented 1 year ago

My GPU is an NVIDIA RTX 3090. I have already installed CUDA, and install.sh also compiled the CUDA package. I also enabled enable_gpu on the command line, but when I render the speed is still very slow, and nvidia-smi shows the process is not using any GPU memory.


David-Yan1 commented 1 year ago

enable_gpu really means "enable CUDA-accelerated terrain meshing" and will be renamed. GPU-accelerated rendering is supposed to happen in the short render step regardless of whether this flag is passed.

luoluoluooo commented 1 year ago

OK, I got it. Not all stages use GPU acceleration.

bssrdf commented 1 year ago

Actually, the render part of the code enables GPUs regardless of whether the enable_gpu flag is passed.

# render/render.py
def enable_gpu(engine_name = 'CYCLES'):
    # from: https://github.com/DLR-RM/BlenderProc/blob/main/blenderproc/python/utility/Initializer.py
    compute_device_type = None
    prefs = bpy.context.preferences.addons['cycles'].preferences
    # Use cycles
    bpy.context.scene.render.engine = engine_name
    bpy.context.scene.cycles.device = 'GPU'

    preferences = bpy.context.preferences.addons['cycles'].preferences
    for device_type in preferences.get_device_types(bpy.context):
        preferences.get_devices_for_type(device_type[0])

    # Prefer OptiX over CUDA: use the first backend in this order that has a matching device.
    for gpu_type in ['OPTIX', 'CUDA']:#, 'METAL']:
        found = False
        for device in preferences.devices:
            if device.type == gpu_type and (compute_device_type is None or compute_device_type == gpu_type):
                bpy.context.preferences.addons['cycles'].preferences.compute_device_type = gpu_type
                logger.info('Device {} of type {} found and used.'.format(device.name, device.type))
                found = True
                break
        if found:
            break

    # make sure that all visible GPUs are used
    for device in prefs.devices:
        device.use = True

    return prefs.devices

def render_image(
    camera_id,
    min_samples,
    num_samples,
    time_limit,
    frames_folder,
    adaptive_threshold,
    exposure,
    passes_to_save,
    flat_shading,
    use_dof=False,
    dof_aperture_fstop=2.8,
    motion_blur=False,
    motion_blur_shutter=0.5,
    render_resolution_override=None,
    excludes=[],
):
    tic = time.time()

    camera_rig_id, subcam_id = camera_id

    for exclude in excludes:
        bpy.data.objects[exclude].hide_render = True

    with Timer(f"Enable GPU"):
        devices = enable_gpu()
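
(For anyone debugging this, here is a minimal sketch, not from the repo, that just prints which Cycles devices are currently enabled. It uses only the same preference calls as enable_gpu above; run it inside Blender's Python, e.g. blender --background --python check_gpu.py, where check_gpu.py is a name I made up.)

# check_gpu.py -- hypothetical helper, not part of the repository
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences

# Populate the device list for every backend Blender knows about.
for device_type in prefs.get_device_types(bpy.context):
    prefs.get_devices_for_type(device_type[0])

print('compute_device_type:', prefs.compute_device_type)
print('scene cycles device:', bpy.context.scene.cycles.device)
for device in prefs.devices:
    print(f'{device.type:6s} {device.name}  use={device.use}')

If compute_device_type prints NONE, or every GPU device shows use=False while the render step is running, Cycles has fallen back to the CPU even though the log above says a device was found.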

If I use tools/manage_datagen_jobs.py to generate images, GPU/CUDA is not being used in the rendering step, even though the log file indicates the GPU was used:

[00:08:01.074] [times] [INFO] | [Enable GPU]
[00:08:01.314] [rendering.render] [INFO] | Device NVIDIA GeForce GTX 1070 of type CUDA found and used.
[00:08:01.314] [rendering.render] [INFO] | Device NVIDIA GeForce GTX 1070 of type CUDA found and used = True.
[00:08:01.314] [rendering.render] [INFO] | Device Intel Core i7-6700 CPU @ 3.40GHz of type CPU found and used = False.
[00:08:01.314] [times] [INFO] | [Enable GPU] finished in 0:00:00.240083

However, if I manually execute the render step as listed in run_pipeline.sh, e.g.

nice -n 20 $BLENDER --background -y -noaudio --python generate.py -- --input_folder outputs/seaice6/0/fine_0_0_0048_0 --output_folder outputs/seaice6/0/frames_0_0_0048_0 --seed 0 --task render --task_uniqname short_0_0_0048_0 -g arctic intermediate -p render.render_image_func=@full/render_image LOG_DIR='outputs/seaice6/0/logs' execute_tasks.frame_range=[48,48] execute_tasks.camera_id=[0,0] execute_tasks.resample_idx=0

then Blender uses the GPU and rendering is significantly faster (25 min on a GTX 1070 vs. 4 hours on an Intel i7-6700 @ 3.4 GHz). I don't know why tools/manage_datagen_jobs.py is not using the GPUs. What is the difference between running tools/manage_datagen_jobs.py and directly executing the commands in run_pipeline.sh?

Update

I finally figured out why tools/manage_datagen_jobs.py is not using GPUs for rendering.

The culprit is that local_16GB.gin has the line LocalScheduleHandler.use_gpu=False, which turns off GPUs.

So if you, like me, want to run all other steps on the CPU but only rendering on the GPU, set:

LocalScheduleHandler.use_gpu=True
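
In other words, a sketch of the edit in local_16GB.gin (or whichever local config you pass); the comments are mine, only the use_gpu line matters:

# local_16GB.gin (excerpt)
# LocalScheduleHandler.use_gpu = False   # the original line, which keeps GPUs turned off
LocalScheduleHandler.use_gpu = True      # expose the GPU so the render step can use CUDA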

For my machine, which is very old, this is the only way to get decent performance out of it. In particular, Cycles rendering is very slow on CPU only, but switching to CUDA really made a difference, even with a generations-old GTX 1070.

If you have beefy GPUs (3090/4090), turn on enable_gpu to accelerate terrain generation as well.

Thanks to @badgids for the LocalScheduleHandler.use_gpu=True tip.

WellTung666 commented 1 year ago

My GPU is an NVIDIA RTX 3090. I have already installed CUDA, and install.sh also compiled the CUDA package. I also enabled enable_gpu on the command line, but when I render the speed is still very slow, and nvidia-smi shows the process is not using any GPU memory.

I also have the same problem. How can I solve it?

luoluoluooo commented 1 year ago

My GPU is an NVIDIA RTX 3090. I have already installed CUDA, and install.sh also compiled the CUDA package. I also enabled enable_gpu on the command line, but when I render the speed is still very slow, and nvidia-smi shows the process is not using any GPU memory.

I also have the same problem. How can I solve it?

Perhaps this is not a problem. In some stages the program uses GPU acceleration; in other stages it does not.

araistrick commented 1 year ago

You should expect to see GPU usage briefly during the fine_terrain stage, and for a decent duration during any rendering stage. Zero GPU usage during the coarse/populate stages is expected and typical. Confusion regarding LocalScheduleHandler.use_gpu will be cleared up via a PR.