wq9 opened this issue 3 months ago
Hey @wq9
I think this should work when you add the batch dimension to the height and width inputs. So, assuming the batch size is 32 in your pipeline, the client code would look like:
```python
width = np.ones((32, 1), dtype=np.int16) * 640
height = np.ones((32, 1), dtype=np.int16) * 360
```
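For reference, a self-contained version of that snippet with the batch size named explicitly (`BATCH` is a placeholder for whatever batch size your request actually uses):

```python
import numpy as np

BATCH = 32  # must match the batch size of the request being sent

# One (width, height) pair per sample: shape (batch, 1), not a bare scalar.
width = np.ones((BATCH, 1), dtype=np.int16) * 640
height = np.ones((BATCH, 1), dtype=np.int16) * 360

print(width.shape, height.shape)  # (32, 1) (32, 1)
```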
@banasraf Adding the batch dimension worked. Thanks!
However, when the input is a video (`video_raw = np.expand_dims(np.fromfile(FLAGS.video, dtype=np.uint8), axis=0)`), the last batch has fewer than 32 samples, so I get the error:
```
[/opt/dali/dali/pipeline/operator/operator.cc:43] Assert on "curr_batch_size == static_cast<decltype(curr_batch_size)>(arg.second.tvec->num_samples())" failed:
ArgumentInput has to have the same batch size as an input.
```
Is there a way to pad the batch dimension?
@wq9
Unfortunately, this operator does not allow padding the last batch, and I don't see a workaround that would make your case work as-is. The only options I see are hardcoding the width and height in the pipeline, or, if you know the number of frames in the sample, predicting when to send partial width and height tensors.
I'll add a task to our backlog to extend the video input operator with the option to pad the last batch.
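The second workaround (predicting when the last batch is partial) can be sketched on the client side. `arg_batches` is a hypothetical helper, not part of DALI or Triton; it assumes you know the total frame count up front:

```python
import numpy as np

def arg_batches(num_frames, batch_size, width_px, height_px):
    """Yield (width, height) argument tensors sized to match each batch,
    including a smaller pair for the final partial batch."""
    remaining = num_frames
    while remaining > 0:
        cur = min(batch_size, remaining)  # partial on the last iteration
        yield (np.full((cur, 1), width_px, dtype=np.int16),
               np.full((cur, 1), height_px, dtype=np.int16))
        remaining -= cur

# 100 frames at batch size 32 -> batches of 32, 32, 32, 4
batches = list(arg_batches(100, 32, 640, 360))
print([w.shape[0] for w, _ in batches])  # [32, 32, 32, 4]
```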
I'm trying to use a scalar input to resize a video, but I can't figure out how to set the `ndim` parameter of `external_source` or the shape of the input in the client.
- config.pbtxt
- 1/dali.py
- client.py (from video_decode_remap)
If I run that, I get:

```
unexpected shape for input 'HEIGHT' for model 'resize_224'. Expected [-1,-1], got [1]
```

How do you properly set and get the scalar values in both client.py and dali.py?
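The `Expected [-1,-1], got [1]` message suggests the config.pbtxt currently declares HEIGHT with two dynamic dimensions, while the client sends a per-sample shape of `[1]`, so the two sides disagree on the number of dimensions. One way to make the shapes line up, assuming the pipeline declares the input as a per-sample 1-element vector (e.g. `fn.external_source(name="HEIGHT", ndim=1, ...)`) and the client sends a `(batch_size, 1)` array as in the snippet earlier in this thread, is to declare a single static dimension. A sketch, not a verified config:

```
input [
  {
    name: "HEIGHT"
    data_type: TYPE_INT16  # must match the dtype the pipeline expects
    dims: [ 1 ]            # per-sample shape; the batch dim is implicit
  }
]
```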