NVIDIA / DALI

A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning training and inference applications.
https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html
Apache License 2.0

How can I convert a TensorListGPU to a TensorListCPU? #5619

YJonmo opened this issue 1 week ago

YJonmo commented 1 week ago

Describe the question.

Hi there,

I am new to DALI.

I am trying to decide on the window_size based on the content of the image and then apply a Gaussian blur. However, fn.gaussian_blur does not accept a GPU DataNode for the window_size argument, and masks is a GPU TensorList.

    nonzero_ratio = fn.reductions.mean(fn.cast(masks > 0, dtype=types.FLOAT), axes=(0, 1))

    # Calculate the blur size based on the ratio
    blur_size = fn.cast(16 + nonzero_ratio * 300, dtype=types.INT32)
    # Apply Gaussian blur
    masks = fn.gaussian_blur(masks, window_size=[blur_size])

TypeError: RunOperatorGPU(): incompatible function arguments. The following argument types are supported:
    1. (self: nvidia.dali.backend_impl.PipelineDebug, arg0: int, arg1: List[nvidia.dali.tensors.TensorListGPU], arg2: Dict[str, nvidia.dali.tensors.TensorListCPU], arg3: int) -> List[nvidia.dali.tensors.TensorListGPU]

Help please.


klecki commented 1 week ago

Hi @YJonmo, you need to pass the blur_size without wrapping it in a Python list:

masks = fn.gaussian_blur(masks, window_size=blur_size)

Also, all named arguments (like window_size) need to be located on the CPU, as described in our documentation: https://docs.nvidia.com/deeplearning/dali/main-user-guide/docs/pipeline.html#id2. Such a DataNode needs to be produced on the CPU, because you can't move data back from the GPU to the CPU; we are currently working on lifting this limitation.
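
For reference, here is a minimal sketch, not from the original thread, of how the same idea can be arranged so that the window_size computation stays on the CPU while the blur itself runs on the GPU. The external_source feed, the single-channel HWC mask layout, the odd-size formula, and the batch/thread/device settings are placeholder assumptions, not the original poster's setup.

    # Sketch only: keep the reduction that derives window_size on the CPU,
    # because named arguments must be CPU DataNodes; only the blur runs on the GPU.
    from nvidia.dali import pipeline_def, fn, types

    @pipeline_def(batch_size=8, num_threads=4, device_id=0)
    def adaptive_blur_pipeline():
        # Hypothetical CPU input; replace with your actual reader/decoder.
        masks = fn.external_source(name="masks", dtype=types.UINT8)

        # Fraction of non-zero pixels, computed on the CPU
        # (assumes single-channel masks laid out as HWC).
        nonzero_ratio = fn.reductions.mean(
            fn.cast(masks > 0, dtype=types.FLOAT), axes=(0, 1))

        # Derive the window size on the CPU, forced to an odd value
        # (Gaussian kernels typically require an odd diameter).
        blur_size = 2 * fn.cast(8 + nonzero_ratio * 150, dtype=types.INT32) + 1

        # Move only the image data to the GPU; window_size remains a CPU argument.
        blurred = fn.gaussian_blur(masks.gpu(), window_size=blur_size)
        return blurred

Keeping the reduction on the CPU sidesteps the GPU-to-CPU limitation entirely, since the derived window size never has to cross back from the GPU.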