Feature transformers have hard-coded rules for the dtypes of their outputs. These rules are used in the layer builders to ensure that model inputs are aligned to transformer outputs.

#103 would add support for quantization, which would produce different dtypes. More generally, we should make it possible to override the default dtypes where appropriate, in case a user wants `float16` or `float64` for some reason.

This could get a little dicey in the API if there are feature extractors that produce multiple data types (e.g. magnitude and phase information).
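A minimal sketch of one way such an override API could look, assuming numpy dtypes. All names here (`FeatureTransformer`, `MagnitudePhaseTransformer`, `input_specs`) are hypothetical illustrations, not the project's actual classes:

```python
import numpy as np


class FeatureTransformer:
    """Base transformer with a hard-coded default output dtype that
    callers may override, e.g. with float16 or float64."""

    default_dtype = np.float32  # hard-coded rule, as today

    def __init__(self, dtype=None):
        # Fall back to the class-level default when no override is given.
        self.dtype = np.dtype(dtype) if dtype is not None else np.dtype(self.default_dtype)

    def transform(self, x):
        return np.asarray(x, dtype=self.dtype)


class MagnitudePhaseTransformer(FeatureTransformer):
    """The tricky multi-output case: magnitude and phase might each want
    a different precision. A per-output mapping is one way to keep the
    override API unambiguous."""

    default_dtypes = {"magnitude": np.float32, "phase": np.float32}

    def __init__(self, dtypes=None):
        merged = dict(self.default_dtypes)
        merged.update(dtypes or {})
        self.dtypes = {name: np.dtype(d) for name, d in merged.items()}

    def transform(self, x):
        spectrum = np.fft.rfft(np.asarray(x, dtype=np.float64))
        return {
            "magnitude": np.abs(spectrum).astype(self.dtypes["magnitude"]),
            "phase": np.angle(spectrum).astype(self.dtypes["phase"]),
        }


def input_specs(transformer):
    """Layer builders would read the transformer's dtype(s) so model
    inputs stay aligned with outputs, whatever the user overrode."""
    if hasattr(transformer, "dtypes"):
        return dict(transformer.dtypes)
    return {"input": transformer.dtype}


print(input_specs(FeatureTransformer(dtype=np.float16)))
print(input_specs(MagnitudePhaseTransformer({"phase": "float64"})))
```

A per-output mapping like the one sketched above keeps the override unambiguous when a single extractor emits multiple dtypes, at the cost of a slightly less uniform API between single- and multi-output transformers.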