BruceLeeeee closed this issue 5 years ago
I had the same problem. Did you solve it, and how? Thanks~
Can you describe it in more detail? What did you change from the original code?
Did you train the model on COCO and try to test it on another dataset?
@bhyzhao @mks0601 I only changed `Config.dataset` to `COCO`. I think `tf.scatter_nd` in `render_onehot_heatmap()` doesn't check indices for out-of-bounds, which induces the error. If I clip the value of `target_coord`, it works.
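For intuition, here is a minimal sketch (with illustrative sizes, not the repo's actual config values) of how a keypoint near the input's edge produces an index one cell past the heatmap once the bilinear splat adds `+1`:

```python
# Why out-of-bounds indices occur: a keypoint at the input's right edge maps
# to the last heatmap column, and the bilinear splat then indexes x_floor + 1.
input_w, output_w = 256, 64        # illustrative sizes, not the repo config
x_input = 255.9                    # keypoint near the right edge of the input
x = x_input / input_w * output_w   # scaled coordinate in heatmap space
x_floor = int(x)                   # 63: the last valid column (0..63)
print(x_floor + 1)                 # 64: one past the range tf.scatter_nd accepts
```

Clipping `x_floor` to `output_w - 2` (as in the fix below) keeps `x_floor + 1` inside the valid range.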
Which version of TF are you using? According to my experience and the doc (https://www.tensorflow.org/api_docs/python/tf/scatter_nd), on GPU, out-of-bound indices are ignored.
I have tested on tensorflow==1.12, tensorflow==1.14, and tensorflow-gpu==1.14, and all have the same error. According to the doc it should work, but I don't know why.
I also used 1.12 when implementing PoseFix. That is weird.. Can you tell me how you clipped the coordinates?
I am not sure if it is correct, but I think those out-of-bounds points are invalid, so they would not affect the loss, right?

```python
def render_onehot_heatmap(self, coord, output_shape):
    batch_size = tf.shape(coord)[0]

    # scale coordinates from input space to heatmap space
    x = tf.reshape(coord[:, :, 0] / cfg.input_shape[1] * output_shape[1], [-1])
    y = tf.reshape(coord[:, :, 1] / cfg.input_shape[0] * output_shape[0], [-1])
    x_floor = tf.floor(x)
    y_floor = tf.floor(y)

    x_floor = tf.clip_by_value(x_floor, 0, output_shape[1] - 2)  # fix out-of-bounds x
    y_floor = tf.clip_by_value(y_floor, 0, output_shape[0] - 2)  # fix out-of-bounds y

    indices_batch = tf.expand_dims(tf.to_float(
        tf.reshape(
            tf.transpose(
                tf.tile(
                    tf.expand_dims(tf.range(batch_size), 0),
                    [cfg.num_kps, 1]),
                [1, 0]),
            [-1])), 1)
    indices_batch = tf.concat([indices_batch, indices_batch, indices_batch, indices_batch], axis=0)
    indices_joint = tf.to_float(tf.expand_dims(tf.tile(tf.range(cfg.num_kps), [batch_size]), 1))
    indices_joint = tf.concat([indices_joint, indices_joint, indices_joint, indices_joint], axis=0)

    # indices of the four cells surrounding each keypoint
    indices_lt = tf.concat([tf.expand_dims(y_floor, 1), tf.expand_dims(x_floor, 1)], axis=1)
    indices_lb = tf.concat([tf.expand_dims(y_floor + 1, 1), tf.expand_dims(x_floor, 1)], axis=1)
    indices_rt = tf.concat([tf.expand_dims(y_floor, 1), tf.expand_dims(x_floor + 1, 1)], axis=1)
    indices_rb = tf.concat([tf.expand_dims(y_floor + 1, 1), tf.expand_dims(x_floor + 1, 1)], axis=1)
    indices = tf.concat([indices_lt, indices_lb, indices_rt, indices_rb], axis=0)
    indices = tf.cast(tf.concat([indices_batch, indices, indices_joint], axis=1), tf.int32)

    # bilinear weights for the four cells
    prob_lt = (1 - (x - x_floor)) * (1 - (y - y_floor))
    prob_lb = (1 - (x - x_floor)) * (y - y_floor)
    prob_rt = (x - x_floor) * (1 - (y - y_floor))
    prob_rb = (x - x_floor) * (y - y_floor)
    probs = tf.concat([prob_lt, prob_lb, prob_rt, prob_rb], axis=0)

    heatmap = tf.scatter_nd(indices, probs, (batch_size, *output_shape, cfg.num_kps))
    normalizer = tf.reshape(tf.reduce_sum(heatmap, axis=[1, 2]), [batch_size, 1, 1, cfg.num_kps])
    normalizer = tf.where(tf.equal(normalizer, 0), tf.ones_like(normalizer), normalizer)
    heatmap = heatmap / normalizer

    return heatmap
```
Yes, they would not affect the loss because there is also `target_valid`.
Hi, thanks for your work. I tried to train on the COCO dataset and only changed the dataset in the default config, but I encountered the following error:
Thanks for your time.