VisualComputingInstitute / triplet-reid

Code for reproducing the results of our "In Defense of the Triplet Loss for Person Re-Identification" paper.
https://arxiv.org/abs/1703.07737
MIT License

Embed a single image #81

Open lamhoangtung opened 5 years ago

lamhoangtung commented 5 years ago

Hi, I'm trying to write a script to embed a single image based on your code. It looks something like this:

import json
import os
from importlib import import_module

import cv2
import tensorflow as tf
import numpy as np

sess = tf.Session()

# Read the experiment config
with open(os.path.join('<exp_root>', 'args.json'), 'r') as f:
    config = json.load(f)

# Input img
net_input_size = (
    config['net_input_height'], config['net_input_width'])
img = tf.placeholder(tf.float32, (None, net_input_size[0], net_input_size[1], 3))

# Create the model and an embedding head.
model = import_module('nets.' + config['model_name'])
head = import_module('heads.' + config['head_name'])

endpoints, _ = model.endpoints(img, is_training=False)
with tf.name_scope('head'):
    endpoints = head.head(endpoints, config['embedding_dim'], is_training=False)

# Initialize the network/load the checkpoint.
checkpoint = tf.train.latest_checkpoint(config['experiment_root'])
print('Restoring from checkpoint: {}'.format(checkpoint))
tf.train.Saver().restore(sess, checkpoint)

raw_img = cv2.imread('<img>')
raw_img = cv2.resize(raw_img, net_input_size)
raw_img = np.swapaxes(raw_img, 0, 1)
raw_img = np.expand_dims(raw_img, axis=0)

emb = sess.run(endpoints['emb'],  feed_dict={img: raw_img})[0]

But the results for the same image from my code and from your code are not the same.

Note that no augmentation is applied when I compute the embedding vector.

Am I missing anything here? Thank you for the help.

lamhoangtung commented 5 years ago

Quick update: I've just found out that you use tf.image.decode_jpeg and tf.image.resize_images instead of OpenCV. I switched to them; the output is different now, but still not the same as yours.

Am I missing something like normalization? Here is what I've changed:

path = tf.placeholder(tf.string)
image_encoded = tf.read_file(path)
image_decoded = tf.image.decode_jpeg(image_encoded, channels=3)
image_resized = tf.image.resize_images(image_decoded, net_input_size)
img = tf.expand_dims(image_resized, axis=0)

Thanks ;)

Pandoro commented 5 years ago

The only thing that comes to mind right now is that by default we use test-time augmentation, which you don't. But that depends on how you are using our embed script to create the embeddings you compare against in this case.
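
For a single image, flip augmentation just means embedding both the image and its horizontal mirror and averaging the two vectors. A minimal sketch (not the exact embed.py code), reusing sess, img, endpoints, and raw_img from your script:

# Sketch of test-time flip augmentation for one image; this is not the
# exact embed.py code. Reuses sess, img, endpoints, raw_img from above.
flipped = raw_img[:, :, ::-1, :]  # mirror along the width axis (NHWC)
emb_orig = sess.run(endpoints['emb'], feed_dict={img: raw_img})[0]
emb_flip = sess.run(endpoints['emb'], feed_dict={img: flipped})[0]
emb = (emb_orig + emb_flip) / 2.0  # aggregate the two embeddings by averaging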

lamhoangtung commented 5 years ago

Hi @Pandoro, Thanks for the quick response. This is what I use to compute the embedding vector:

python3 embed.py \
    --experiment_root ... \
    --dataset ... \
    --filename ...

I extracted the vector from the .h5 file.

Anyway, how can I do TTA in my case? Is there any code in your repo I can reference?

Pandoro commented 5 years ago

If you use it like that, it should actually not be doing any test-time augmentation, so that shouldn't be it either. The code to do so is included in embed.py. The only thing that comes to mind is that maybe something goes wrong during extraction of the embedding? Have you tried creating a csv file containing only the one image you want to embed?
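
For reference, the dataset csv is just one identity,relative_image_path pair per line, so a single-image file could look like this (identity and path are made-up examples):

0001,query/0001_c1s1_001051_00.jpg

You can then pass that file to embed.py via --dataset and extract the single row from the resulting h5 file.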

lamhoangtung commented 5 years ago

Hi. I did an experiment with a csv file containing only the image that I want to embed, and found something really strange. Actually, there might be nothing wrong with your embed code or with my inference code:

  • The h5 output file that I previously used for comparison was created on a remote server with a GPU.
  • My inference code was run on my local machine, which only has a CPU. After recomputing everything on the CPU alone, I found a big difference between the embedding vectors computed on GPU vs. CPU. (My code and yours produce exactly the same results.)
  • Note that the difference is HUGE, like completely different. I double-checked the model, code, and input images for the experiment.

Have you ever seen something like this? Am I wrong at some point?

Pandoro commented 5 years ago

I haven't seen this before. I wouldn't be surprised if there are tiny differences, but we frequently used CPUs to embed and evaluate stuff when all GPUs were busy and that worked fine. So something seems to be wrong. Are you using the same tensorflow version for both CPU and GPU?

lamhoangtung commented 5 years ago

@Pandoro Same tensorflow 1.12.0 on both machines.

lamhoangtung commented 5 years ago

Some update on this. I tried to redo everything, even training, and here is the result:

Where could I potentially be going wrong? Here is how I extract the vector out of the h5 file:

import h5py
import numpy as np
import pandas as pd

raw_embedding = h5py.File('....h5', 'r')
raw_label = pd.read_csv('...csv')

def load_data():
    features = raw_embedding['emb'].value
    labels = list(raw_label.iloc[:, 1])
    return (features, labels)

vecs, imgs = load_data()
print(vecs[0], imgs[0])
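
For reference, the size of the gap can be quantified directly (a sketch; emb_gpu and emb_cpu are hypothetical names for the GPU and CPU embeddings of the same image):

# Sketch: quantify the GPU-vs-CPU gap. emb_gpu and emb_cpu are
# hypothetical names for the two embeddings of the same image.
dist = np.linalg.norm(emb_gpu - emb_cpu)
cos = np.dot(emb_gpu, emb_cpu) / (np.linalg.norm(emb_gpu) * np.linalg.norm(emb_cpu))
print('euclidean: {:.4f}  cosine: {:.4f}'.format(dist, cos))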

Thanks for your help @Pandoro

lamhoangtung commented 5 years ago

Note: I tried a bunch of different images, so the problem is not related only to the first sample of the dataset. Question: did you do any dataset-level normalization?

Pandoro commented 5 years ago

I can't say that this sounds like anything I've seen before. If I understand correctly, GPU and CPU results are now the same, but the result depends on whether you have several other images in your batch or just the one specific image?

It sounds like something might be going wrong with the batch normalization, but your script clearly sets is_training=False. We don't do any other normalization, so I honestly have no idea where this could be coming from.
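
For context, with is_training=False batch normalization should use the moving mean/variance stored in the checkpoint rather than the current batch's statistics, so the batch contents should not matter. In TF 1.x terms (a minimal illustration, not our code):

import tensorflow as tf

x = tf.placeholder(tf.float32, (None, 128))
# training=False: normalize with the moving mean/variance restored from the
# checkpoint, so the output for one image does not depend on the rest of
# the batch. training=True would use the batch's own statistics instead.
y = tf.layers.batch_normalization(x, training=False)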

lamhoangtung commented 5 years ago

So which one should I use? Which one is more accurate? Should I create fake batches, or should I keep batch_size = 1 at inference time?

Pandoro commented 5 years ago

There is no useful answer to that question. What you are seeing shouldn't be happening. Currently I don't have time to investigate whether this is an issue with our code, but I highly doubt it, since we haven't seen any such issues so far.

As it is right now, your setup seems to be somehow broken, and thus there is no "more accurate".

What you could do is try to download our pretrained model and run the evaluation on Market-1501 to see if you can recreate our original scores. If you get a different score, something else is broken.
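
If I remember the README correctly, that evaluation is along these lines (paths and filenames are placeholders; double-check the flags against the README):

python3 embed.py \
    --experiment_root <exp_root> \
    --dataset data/market1501_query.csv \
    --filename market1501_query_embeddings.h5

python3 embed.py \
    --experiment_root <exp_root> \
    --dataset data/market1501_test.csv \
    --filename market1501_test_embeddings.h5

python3 evaluate.py \
    --excluder market1501 \
    --query_dataset data/market1501_query.csv \
    --query_embeddings <exp_root>/market1501_query_embeddings.h5 \
    --gallery_dataset data/market1501_test.csv \
    --gallery_embeddings <exp_root>/market1501_test_embeddings.h5 \
    --metric euclidean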

mazatov commented 4 years ago

@lamhoangtung, were you able to figure this out? I'm trying to follow your steps to generate embeddings and compare them, but so far I'm running into some errors:

I cannot load the model this way for some reason (see #85):

checkpoint = tf.train.latest_checkpoint(config['experiment_root'])

I tried loading the model this way,

saver = tf.train.import_meta_graph(r'experiments\my_experiment\checkpoint-25000.meta')
saver.restore(sess, r'experiments\my_experiment\checkpoint-25000')

but that still gives me an error when I try to run
emb = sess.run(endpoints['emb'], feed_dict={img: raw_img})[0]

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/gamma
     [[node resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/gamma/read (defined at C:\Users\mazat\Documents\Python\trinet\nets\resnet_v1.py:118) ]]
     [[node head/emb/BiasAdd (defined at C:\Users\mazat\Documents\Python\trinet\heads\fc1024.py:17) ]]

Thanks

mazatov commented 4 years ago

@lamhoangtung I think I figured out the first problem.

  1. cv2 loads the image in BGR order, so you need to convert it to RGB.
  2. There seem to be some differences in the way cv2 and tensorflow decode JPEG images. Check https://stackoverflow.com/questions/45516859/differences-between-cv2-image-processing-and-tf-image-processing

So, to get the cv2-loaded embeddings close to the embed.py values, I did the following.

raw_img = cv2.imread(os.path.join(config['image_root'],'query', '0001_c1s1_001051_00.jpg'))
raw_img = cv2.cvtColor(raw_img, cv2.COLOR_BGR2RGB)
raw_img = cv2.resize(raw_img, (net_input_size[1], net_input_size[0]))
raw_img = np.expand_dims(raw_img, axis=0)

If you want to get exactly the same values, you can load the image with TF instead of cv2:

image_encoded = tf.read_file(os.path.join(config['image_root'],'query', '0001_c1s1_001051_00.jpg'))
image_decoded = tf.image.decode_jpeg(image_encoded, channels=3)
image_resized = tf.image.resize_images(image_decoded, net_input_size)
img = tf.expand_dims(image_resized, axis=0)

# Create the model and an embedding head.
model = import_module('nets.' + config['model_name'])
head = import_module('heads.' + config['head_name'])

endpoints, _ = model.endpoints(img, is_training=False)
with tf.name_scope('head'):
    endpoints = head.head(endpoints, config['embedding_dim'], is_training=False)

tf.train.Saver().restore(sess, os.path.join(config['experiment_root'],'checkpoint-25000') )

emb = sess.run(endpoints['emb'])[0]

I got almost identical embeddings this way.
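
To sanity-check the two pipelines against each other, you can compare the vectors directly (a sketch; emb_cv2 and emb_tf are hypothetical names for the embeddings from the two snippets above):

# Sketch: compare the cv2-based and tf-based embeddings of one image.
# emb_cv2 and emb_tf are hypothetical names for the two results.
print('L2 difference:', np.linalg.norm(emb_cv2 - emb_tf))

With the tf.image pipeline the difference should be essentially zero; with the cv2 pipeline it stays small but nonzero because of the JPEG decoding differences mentioned above.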