For the 2-D wavelet transforms (both DB-2 and Haar), tf.nn.conv2d was complaining about the number of input dimensions. Both wavelet transforms call tf.nn.conv1d, which adds a dimension to the input tensor and then calls tf.nn.conv2d. For the 2-D wavelet transform the input tensor is already 4-D (batch x row x column x channel), so when tf.nn.conv1d adds the extra dimension, the tensor becomes 5-D. tf.nn.conv2d expects 4-D tensors, so it complains about receiving a 5-D tensor.
I think my reshaping fixed the problem.
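A minimal sketch of the kind of reshape that works around this: collapse the batch and row dimensions into one before calling tf.nn.conv1d, so the input is the 3-D (batch, width, channels) shape that conv1d expects, then restore the original layout afterwards. The shapes and the Haar low-pass kernel here are illustrative assumptions, not the original code.

```python
import tensorflow as tf

# Hypothetical input shape; the real dimensions come from the data pipeline.
batch, rows, cols, channels = 2, 8, 8, 1
x = tf.random.normal([batch, rows, cols, channels])

# Haar low-pass filter, shaped (filter_width, in_channels, out_channels)
# as tf.nn.conv1d requires.
haar_lo = tf.constant([[[2 ** -0.5]], [[2 ** -0.5]]])

# Collapse batch and row dims so the tensor is 3-D. tf.nn.conv1d will add
# its own dimension internally and hand a 4-D tensor to tf.nn.conv2d,
# instead of a 5-D one.
x_rows = tf.reshape(x, [batch * rows, cols, channels])
y_rows = tf.nn.conv1d(x_rows, haar_lo, stride=2, padding='VALID')

# Restore the batch/row structure; stride 2 halves the column dimension.
y = tf.reshape(y_rows, [batch, rows, cols // 2, channels])
```

The same trick applies along the row axis by transposing rows and columns first, filtering, and transposing back.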