masoumeh1 opened 7 years ago
I have the same error. According to https://github.com/pender/chatbot-rnn/issues/6, variable names changed in TensorFlow 1.0. I'm not sure how to solve it myself; I would be very interested if you found a solution.
I trained from scratch and did not use the pretrained model, so the problem is solved: set `restore_weights = False`.
Also, I changed the saver version to V1 by writing this:

```python
from tensorflow.core.protobuf import saver_pb2
self.saver = tf.train.Saver(write_version=saver_pb2.SaverDef.V1)
```
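If you do need the pretrained checkpoint instead of retraining, one possible workaround (a sketch of my own, not tested against this repo; the old/new names below are only illustrative of the typical pre/post-1.0 LSTM renaming) is to inspect the names actually stored in the checkpoint and restore through an explicit name map:

```python
import tensorflow as tf

# List the variable names actually stored in the checkpoint.
reader = tf.train.NewCheckpointReader("model_step6_exp2.ckpt")
print(reader.get_variable_to_shape_map())

# Restore a tensor saved under an old name into the renamed variable in
# the current graph (both names here are examples, not from this repo).
lstm_weights = [v for v in tf.global_variables()
                if v.op.name == "rnn/lstm_cell/weights"][0]
saver = tf.train.Saver(var_list={"RNN/LSTMCell/W_0": lstm_weights})
```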
Thanks for the direction. I have tried to do as you suggested and have gotten further. However, some YOLO output (it should be located in a benchmark folder), which I cannot find in the repo, is required in the testing method. Did you run into this as well?
Never mind, I found the output; it is in the DATA.zip file. I am leaving the comment here in case anyone else stumbles on this.
Yes, yolo_out is available in the data folder.
Also, you can run `ROLO/3rd party/YOLO_network.py` on your own data.
@masoumeh1 I am also facing a similar issue. Can you please help me with it? Here is the error:
File "./experiments/testing/ROLO_network_test_all.py", line 1232, in testing
batch_xs = self.rolo_utils.load_yolo_output_test(x_path, self.batch_size, self.num_steps, id) # [num_of_examples, num_input] (depth == 1)
File "./experiments/testing/ROLO_network_test_all.py", line 350, in load_yolo_output_test
paths = [os.path.join(fold,fn) for fn in next(os.walk(fold))[2]]
StopIteration
*** Error in `python': free(): invalid pointer: 0x0000000001f384d0 ***
Aborted (core dumped)
@rishabh135 Most likely the yolo_out folder doesn't exist for the sequence that `os.walk` is looking for.
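For context: `next(os.walk(fold))` raises `StopIteration` when `fold` does not exist, which is exactly the traceback above. A minimal guard (a sketch; `fold` is the per-sequence yolo_out path) makes the failure explicit instead:

```python
import os

if not os.path.isdir(fold):
    raise IOError("yolo_out folder missing for this sequence: %s" % fold)
paths = [os.path.join(fold, fn) for fn in next(os.walk(fold))[2]]
```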
@suryaprakaz Hey, can you tell me what I should put in the yolo_out folder?
@rishabh135 Instead of using:

```python
paths = [os.path.join(fold, fn) for fn in next(os.walk(fold))[2]]
```

use this:

```python
paths = []
i = 0
for (dirpath, dirnames, filenames) in os.walk(fold):
    i = i + 1
    if i == 1:
        for fn in filenames:
            paths.append(os.path.join(dirpath, fn))
```
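The practical difference: when `fold` does not exist, `os.walk(fold)` simply yields nothing, so `paths` stays empty instead of `next()` raising `StopIteration` and crashing the test run.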
@masoumeh1 Do I have to manually take my video, run YOLO on it, and put the output frames with bounding boxes in yolo_out?
Yes. If you want to use your own data, you have to produce yolo_out by running YOLO_network.py from https://github.com/Guanghan/ROLO/blob/master/3rd%20party/YOLO_network.py
@masoumeh1 Thanks, yes, that is what I gather too. Can you give me a bit of an idea of how I go about getting its output on a test video "test.mp4" with YOLO_network.py? Thanks a lot :)
You need to have the frames of the video as well as the ground truth file as input to YOLO_network.py.
@masoumeh1 What do you mean by ground truth file? I have a sequence of 780 images, labelled as frame%5d.png, which I ran through darknet to get bounding boxes on each of them. So what command line arguments should I provide to YOLO_network.py?
If you want to use the vanilla ROLO network, you must have a dataset curated to suit the needs of the default code. Take a look at the code in 3rd Party/YOLO_network.py. As of now, it separates the target of interest from the others using the GT values; in other words, it only does single-object tracking so far (although MOLO might shed some light on that).
Therefore, you might have to supply GT values for your own video even for evaluation. This is one of the disadvantages of the YOLO detector part. You can try to overcome it with your own logic.
If you are using this for experimentation, I'd recommend using the original OTB-30 sequences.
> @masoumeh1 Do I have to manually take my video, run YOLO on it, and put the output frames with bounding boxes in yolo_out?
Yes, you can also do that. But make sure you are dumping the feature maps (the vector prior to the last fully connected layer) and the detection results of a single object.
@suryaprakaz How do I provide it with GT values? I have a sequence of images that have bounding boxes on them; how do I explicitly provide them for evaluation?
Simply follow the sample ground truth format found in http://guanghan.info/projects/ROLO/DATA/DATA.zip
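For reference, if the sample follows the usual OTB convention (which ROLO's DATA.zip is derived from; treat this as an assumption and check the actual files), the groundtruth_rect.txt file has one line per frame giving the target box as x, y, width, height in pixels, e.g. (values here are made up):

```
336,165,26,61
333,166,26,61
330,167,26,61
```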
@suryaprakaz It seems that GT is also used in the test process. Do you think that is reasonable? I think it should not be used in the test, because in a real test we don't have access to GT. Can you explain this problem to me?
It is unreasonable, and the question is rhetorical: asking for GT during testing, if it is not used purely for accuracy calculation, would bring on the wrath of any machine learning scientist. However, due to the nature of the tracking problem, a clean sequence of detection results becomes indispensable for achieving good performance, and if you peer into the results of YOLO you can easily see that is not the case. I believe LSTMs are capable of deciding what to remember/forget about the target, as claimed in the paper. I reckon that, just to make a proof of concept, Guanghan came up with this simple yet faulty mechanism to separate out the target. With a bit of inspection, I think it is possible for us to get around this issue.
@suryaprakaz What do you mean by "faulty mechanism"? Using unclean YOLO results, or using GT in the test? And what do you mean by "separating out the target"?
Using GT in the test is the "faulty" part. Among all YOLO detections, it chooses one detection (the target) in each frame to track, by IoU; this is "separating out the target".
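Roughly, the selection step looks like this (a sketch of the idea, not the repo's exact code; `frame_detections` and `gt_box` are illustrative names, with boxes as `[x, y, w, h]`):

```python
def iou(a, b):
    # Intersection-over-union of two [x, y, w, h] boxes in pixels.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / float(union) if union > 0 else 0.0

# Per frame, keep the single YOLO detection that best overlaps the GT box.
target = max(frame_detections, key=lambda det: iou(det, gt_box))
```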
@suryaprakaz Could I have your email? My email is mpg@kth.se; could you please send me an email?
@suryaprakaz Do you know how I can produce OPE, SRE, and TRE (the OTB one-pass, spatial robustness, and temporal robustness evaluations) for my own data in the evaluation process?
Hi, I get this error when I try to train step 6 of exp1 or exp2:

```
NotFoundError: Tensor name "rnn/lstm_cell/weights" not found in checkpoint files C:/My files/ROLO/ROLO-master/experiments/training/model_step6_exp2.ckpt
```

Has anyone had a similar problem, and how did you solve it?
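One way to debug this (a suggestion, not something confirmed in this thread): list the tensor names actually stored in the checkpoint and compare them with the name in the error. In TF >= 1.2 the LSTM variables were renamed from `weights`/`biases` to `kernel`/`bias`, so a mismatch like this usually means the checkpoint and the running TensorFlow version disagree on naming:

```python
from tensorflow.python.tools import inspect_checkpoint as chkp

# Print the tensor names stored in the checkpoint so they can be compared
# against the name the graph is asking for ("rnn/lstm_cell/weights").
chkp.print_tensors_in_checkpoint_file(
    "experiments/training/model_step6_exp2.ckpt",
    tensor_name="",
    all_tensors=False)
```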