When I try to run inference using a checkpoint I created:
python .\inference.py -c .\config\default.yaml -p .\checkpoints\output\output_fastspeech_d7ef3cf_1k_steps.pyt --out output --text "ModuleList can be indexed like a regular Python list but modules it contains are properly registered."
I get the following error:
RuntimeError: Calculated padded input size per channel: (8). Kernel size: (9). Kernel size can't be greater than actual input size
I trained using the following setting in the default.yaml file:
positionwise_conv_kernel_size : 9
When I instead attempt to train with positionwise_conv_kernel_size : 8, I get a training error as well. Any help would be appreciated.
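For reference, here is a minimal standalone sketch (my own guess at the failing condition, not code from the FastSpeech repo) that reproduces the same RuntimeError: a torch.nn.Conv1d whose kernel (9) is wider than the padded input length (8). The channel count and padding=0 are assumptions for illustration only.

import torch
import torch.nn as nn

# Hypothetical repro, not the actual position-wise conv module:
# a 1-D convolution whose kernel width (9) exceeds the padded input length (8).
conv = nn.Conv1d(in_channels=256, out_channels=256, kernel_size=9, padding=0)

x = torch.randn(1, 256, 8)  # (batch, channels, sequence length = 8)
y = conv(x)  # RuntimeError: Calculated padded input size per channel: (8). Kernel size: (9). ...

So the error seems to mean that, at some point in the network, the sequence being convolved is shorter than the kernel, not that the checkpoint itself is broken.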