I would like to have other options of padding for tf.pad and convolution ops. Some types that come to my mind right now (as in numpy.pad): 'constant', 'edge', 'linear_ramp', 'maximum', 'mean', 'median', 'minimum', 'reflect', 'symmetric', 'wrap'.
Note that, for experimental purposes, all of these can be implemented by doing the padding as a separate layer and using VALID padding.
Work is in progress to add reflect and symmetric padding modes (as in numpy.pad).
@sungjinhwang What about padding with a constant value other than zero? This would be useful because I want to set the constant value to the average value of the input. It would also be useful to be able to choose a constant value for each channel.
@cesarsalgado: You can pad with a constant other than zero by subtracting a constant from the input, padding with zero, and compensating in the output.
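In code, the trick looks roughly like this (a sketch; the helper name is made up):

import tensorflow as tf

# tf.pad fills with zeros, so shift the input by -c and shift back
# afterwards; c can be a scalar or a per-channel vector that broadcasts
def pad_with_constant(x, paddings, c):
    return tf.pad(x - c, paddings) + c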
@girving That is clever =) Thanks!
Would it be possible to also support border replication? Something similar to https://github.com/torch/nn/blob/master/doc/convolution.md#nn.SpatialReplicationPadding
I think repeatedly calling tf.pad with SYMMETRIC mode and pad size 1 would work (see the sketch below), but perhaps there could be a streamlined version?
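Something like this (a sketch; the helper name is mine): each SYMMETRIC pass of size 1 duplicates the current border row/column, so repeating it emulates replication padding.

import tensorflow as tf

def replication_pad_2d(x, padding=1):
    # each SYMMETRIC pass of size 1 duplicates the current border
    for _ in range(padding):
        x = tf.pad(x, [[1, 1], [1, 1]], mode="SYMMETRIC")
    return x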
Any progress on this?
Please add other padding modes, as in numpy.pad: https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html
- 'constant': pads with a constant value (different than zero).
- 'mean': pads with the mean value of all or part of the vector along each axis.
- 'wrap': pads with the wrap of the vector along the axis; the first values are used to pad the end and the end values are used to pad the beginning.
The last one corresponds to periodic boundary conditions, which are especially useful in physics calculations and for convolutional neural networks with full symmetry on the images (example: the Ising model).
Note that this is very different from tf.pad(t, paddings, "REFLECT"), as can be seen in the following code:
import numpy
matrix = numpy.arange(25).reshape(5, 5)
paddings = [[1, 1], [1, 1]]
# periodic boundary conditions
padded_matrix = numpy.pad(matrix, pad_width=paddings, mode='wrap')
import tensorflow as tf
session = tf.InteractiveSession()
M = tf.constant(matrix)
padded_M = tf.pad(M, paddings, mode="REFLECT")
(padded_matrix == padded_M.eval()).all()
# False
The 'CONSTANT' option in TensorFlow is quite confusing, as it only supports zero padding. If constant padding is not available, the option should be renamed to 'ZERO'.
Please add other padding modes, such as 'wrap' in numpy.pad (as also requested by @FedericoMuciaccia). Besides, could anyone give me a clue as to how to implement 'wrap' padding using only the current TensorFlow functions? Thanks in advance.
To implement wrap padding (only for the height and width, with a given size), I wrote the code below (I hope it is right):
def wrap_pad(input, size):
    # wrap the width axis (2), then the height axis (1), assuming NHWC input
    M1 = tf.concat([input[:, :, -size:, :], input, input[:, :, 0:size, :]], 2)
    M1 = tf.concat([M1[:, -size:, :, :], M1, M1[:, 0:size, :, :]], 1)
    return M1
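A quick sanity check of the helper above (assuming an NHWC input):

import tensorflow as tf

x = tf.reshape(tf.range(24), [1, 3, 4, 2])  # NHWC
y = wrap_pad(x, 1)  # shape (1, 5, 6, 2)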
@YuanmanLi Here are two possible ways to implement this. The first one is way simpler, faster, and more memory efficient:
import tensorflow as tf

def periodic_padding(image, padding=1):
    '''
    Create a periodic padding (wrap) around the image, to emulate periodic boundary conditions
    '''
    upper_pad = image[-padding:, :]
    lower_pad = image[:padding, :]
    partial_image = tf.concat([upper_pad, image, lower_pad], axis=0)
    left_pad = partial_image[:, -padding:]
    right_pad = partial_image[:, :padding]
    padded_image = tf.concat([left_pad, partial_image, right_pad], axis=1)
    return padded_image

# usage example:
session = tf.InteractiveSession()
image = tf.reshape(tf.range(30, dtype='float32'), shape=[5, 6])
padded_image = periodic_padding(image, padding=2)
image.eval()
padded_image.eval()
The second one is more complex and slower, because it requires a lot of computation. But here the padding is done via matrix multiplication (with fixed auxiliary matrices), so in principle I think this implementation could be used to exploit end-to-end differentiation:
import tensorflow as tf

def periodic_padding(image, padding=1):
    '''
    Create a periodic padding (wrap) around the image, to emulate periodic boundary conditions
    '''
    rows, columns = image.shape
    # create left matrix
    left_corner_diagonal = tf.eye(padding)
    left_filled_zeros = tf.zeros([padding, rows.value - padding])
    left_upper = tf.concat([left_filled_zeros, left_corner_diagonal], axis=1)
    left_center_diagonal = tf.eye(rows.value)
    left_lower = tf.concat([left_corner_diagonal, left_filled_zeros], axis=1)
    left_matrix = tf.concat([left_upper, left_center_diagonal, left_lower], axis=0)
    # create right matrix
    right_corner_diagonal = tf.eye(padding)
    right_filled_zeros = tf.zeros([columns.value - padding, padding])
    right_left_side = tf.concat([right_filled_zeros, right_corner_diagonal], axis=0)
    right_center_diagonal = tf.eye(columns.value)
    right_right_side = tf.concat([right_corner_diagonal, right_filled_zeros], axis=0)
    right_matrix = tf.concat([right_left_side, right_center_diagonal, right_right_side], axis=1)
    # left and right matrices are immutable
    padded_image = tf.matmul(left_matrix, tf.matmul(image, right_matrix))
    return padded_image

# usage example:
session = tf.InteractiveSession()
# left and right matrices are immutable, given a fixed image shape
image = tf.reshape(tf.range(30, dtype='float32'), shape=[5, 6])
padded_image = periodic_padding(image, padding=2)
image.eval()
padded_image.eval()
Is there any TensorFlower willing to expand this code sketch to implement wrap padding in the next official release?
I am closing this issue now, as the original poster's goals were to add more functionality, and we did. Specifically, we added the most commonly needed types (backed by the MirrorPad op) several months ago. If the more exotic numpy.pad types of padding are needed, please file a new bug referring to this one. However, if somebody implements a new type they need and makes it perform well, we will accept the patch.
@aselle The original poster explicitly mentioned two types of padding, reflect and constant, but only MirrorPad was added, not constant padding.
As I have mentioned, the current "CONSTANT" option is confusing, as it can only fill zeros rather than an arbitrary constant. Although tricks like tf.pad(X - c) + c can be used, the resulting code is obscure.
I don't think constant padding is "exotic" at all, as it is common practice in machine learning (appending a row of 1s to a matrix is very common).
@aselle The original poster also asked that the padding be added to the convolution operation. Would padding inside the convolution be more efficient and avoid an unnecessary copy, compared to tf.pad followed by convolution?
Constant padding with values other than 0 is already supported, so it looks like this issue can be closed.
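For example (assuming a TensorFlow version where tf.pad accepts the constant_values argument):

import tensorflow as tf

x = tf.reshape(tf.range(9), [3, 3])
# fill the border with 7 instead of 0
padded = tf.pad(x, [[1, 1], [1, 1]], mode="CONSTANT", constant_values=7)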
@lubomir1 your question is answered at https://github.com/tensorflow/tensorflow/issues/18213#issuecomment-396736457
The original poster asked for the following choices of padding to be added, both to tf.pad and (!) convolution: 'constant', 'edge', 'linear_ramp', 'maximum', 'mean', 'median', 'minimum', 'reflect', 'symmetric', 'wrap', 'function'.
This functionality is essential to the usability of TensorFlow for practical problems, e.g. the solution of PDEs. There should be a way of specifying a broad range of boundary conditions using padding (Neumann, Dirichlet, periodic, mixed, etc.).
Another upvote for 'wrap' padding from someone coming from a physics-simulation point of view.
The code snippet based on concatenation is pretty suboptimal for applications of this kind, which tend to work with simple filters where everything is 100% memory bottlenecked. Any naive implementation that does not take into account the peculiarities of GPU memory access is going to be far from optimal, I suppose.
Indeed, having a 'wrap' option baked into the convolutional operators at a low level would be ideal; but I suppose that this isn't so much up to TensorFlow as it is up to cuDNN to implement, and AFAIK it does not?
@FedericoMuciaccia
> The second one is more complex and slower, because it requires a lot of computation. But here the padding is done via matrix multiplication (with fixed auxiliary matrices), so in principle I think this implementation could be used to exploit end-to-end differentiation
Why would your first method using concatenation not work for differentiation? I just implemented this version and, from what I can see, autodiff works fine.
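A quick sanity check with the concatenation-based periodic_padding from above (a TF1 graph-mode sketch) suggests the same:

import tensorflow as tf

image = tf.reshape(tf.range(30, dtype='float32'), shape=[5, 6])
padded = periodic_padding(image, padding=2)
# gradients flow through the slicing and tf.concat ops without issue
grads = tf.gradients(tf.reduce_sum(padded ** 2), [image])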
I need to left-pad and right-pad with different values. I have no idea how to convince Keras to do this. Any ideas?
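One possible sketch (assuming tf.pad with constant_values is available; in Keras this could be wrapped in a Lambda layer):

import tensorflow as tf

def pad_left_right(x, left, right, left_value, right_value):
    # pad each side in a separate pass so the fill values can differ
    x = tf.pad(x, [[0, 0], [left, 0]], constant_values=left_value)
    x = tf.pad(x, [[0, 0], [0, right]], constant_values=right_value)
    return x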
Based on @YuanmanLi's answer, here is a way to pad data periodically (a very common padding type in scientific computing) with a layer dedicated to it:
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers import InputSpec
from tensorflow.python.keras.utils import conv_utils

class PeriodicPadding2D(layers.Layer):
    def __init__(self, padding=1, **kwargs):
        super(PeriodicPadding2D, self).__init__(**kwargs)
        self.padding = conv_utils.normalize_tuple(padding, 1, 'padding')
        self.input_spec = InputSpec(ndim=3)

    def wrap_pad(self, input, size):
        # wrap the width axis (2), then the height axis (1)
        M1 = tf.concat([input[:, :, -size:], input, input[:, :, 0:size]], 2)
        M1 = tf.concat([M1[:, -size:, :], M1, M1[:, 0:size, :]], 1)
        return M1

    def compute_output_shape(self, input_shape):
        shape = list(input_shape)
        assert len(shape) == 3
        if shape[1] is not None:
            length = shape[1] + 2 * self.padding[0]
        else:
            length = None
        return tuple([shape[0], length, length])

    def call(self, inputs):
        return self.wrap_pad(inputs, self.padding[0])

    def get_config(self):
        config = {'padding': self.padding}
        base_config = super(PeriodicPadding2D, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
Please note that this code is for my particular problem and I haven't tested its generality yet. Please verify that it works for you.
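A hypothetical usage sketch (shapes are assumptions, untested for generality as noted above):

import numpy as np
from tensorflow.keras import Sequential

model = Sequential([PeriodicPadding2D(padding=2, input_shape=(5, 6))])
out = model.predict(np.arange(30, dtype='float32').reshape(1, 5, 6))
# out.shape == (1, 9, 10)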
I think the scope of this ticket is too broad. Can we close this, as some of the operations are now covered, and open new issues for the missing ones?
This issue is stale because it has been open for 180 days with no activity. It will be closed if no further activity occurs. Thank you.
This issue was closed because it has been inactive for 1 year.