borasy opened this issue 7 years ago
There are 2 ways of doing that: you can split the dataset into training and validation sets inside the code, or you can pass 2 separate datasets, one for training and one for validation, when you call the flow module.
Either way, you should add some new parameters in the defaults.py file, then modify the functions _batch, parse and shuffle in data.py (in both the yolo and yolov2 folders), and modify the train() method in flow.py. There you only have to run another batch (every iteration, or once every N iterations) using the same TensorFlow session, but without train_op in the fetches, so you don't modify the weights. You can also add another tf.summary.FileWriter for validation so you can visualize your validation loss graph with TensorBoard.
I personally chose to send 2 different datasets. It was pretty straightforward. I hope I was clear enough.
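To make the idea concrete, here is a minimal self-contained sketch with a toy TF1 model (not darkflow's actual code; darkflow uses TensorFlow 1.x, so the same pattern applies inside its train() loop): the validation run fetches only the loss and summary, never train_op, so the weights are untouched, and its summaries go to a second FileWriter.

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x, as used by darkflow

# Toy regression model standing in for the real network.
x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable([[0.0]])
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
summary_op = tf.summary.scalar('loss', loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    train_writer = tf.summary.FileWriter('./summary/train', sess.graph)
    val_writer = tf.summary.FileWriter('./summary/val')  # second writer for validation

    for step in range(100):
        xb = np.random.rand(8, 1)
        # Training step: train_op is in the fetches, so the weights are updated.
        _, s = sess.run([train_op, summary_op], {x: xb, y: 3 * xb})
        train_writer.add_summary(s, step)

        if step % 10 == 0:
            xv = np.random.rand(8, 1)
            # Validation step: no train_op in the fetches, weights stay untouched.
            val_loss, s = sess.run([loss, summary_op], {x: xv, y: 3 * xv})
            val_writer.add_summary(s, step)
```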
@Costyv95 Can you share your code with the added parameters and the changes you have suggested?
Yes, no problem. I will upload the files here. If you have any questions, just ask.
@Costyv95 Does the validation set contribute to the gradient update in your implementation?
I got it, validation samples do not contribute to the gradient update.
Yes, validation is only for a preview of the model results outside the training set.
@Costyv95 I just want to know how to run it after modifying the original code? Thanks very much!
@Costyv95 I run it like this:

./flow --model cfg/yolo.cfg --train --dataset "/home/thinkjoy/lwl/modify-darkflow-master/data/VOCdevkit/VOC2007/JPEGImages" --annotation "/home/thinkjoy/lwl/modify-darkflow-master/data/VOCdevkit/VOC2007/Annotations" --gpu 1.0

and here is the error (I modified the code the same way as you did):

```
File "/home/thinkjoy/lwl/modify-darkflow-master/darkflow/net/flow.py", line 82, in train
    feed_dict[self.learning_rate] = lr
AttributeError: 'TFNet' object has no attribute 'learning_rate'
```
This happens because the code I gave you has some modifications for an adaptive learning rate, so there is one more change you have to make. You can find it here: https://github.com/thtrieu/darkflow/pull/216/commits/124d55d32d17bdee111201fd6fe520db709a4f9c
And you should add --val_dataset and --val_annotation to the arguments in order to get a validation loss.
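A hedged sketch of what those two extra defines in defaults.py could look like (the default paths and help strings here are placeholders I made up; only the flag names come from this thread):

```python
# Inside setDefaults() in darkflow/defaults.py (placeholder defaults):
self.define('val_dataset', 'val/images', 'path to the validation images')
self.define('val_annotation', 'val/annotations', 'path to the validation annotations')
```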
@Costyv95 Can we control how many steps to wait between validations? Validating on every step seems like a bit of a waste of training time. Thanks!
@Costyv95 And have you managed to add accuracy computation during validation?
@dream-will To validate once every N steps, you can easily add an argument (val_steps) in defaults.py, and in the train method in flow.py just run the code that comes after "#validation time" inside an if statement like this:
```python
#validation time
if i % self.FLAGS.val_steps == 0:
    # pull the next validation mini-batch and build its feed_dict
    (x_batch, datum) = next(val_batches)
    feed_dict = {
        loss_ph[key]: datum[key]
        for key in loss_ph}
    feed_dict[self.inp] = x_batch
    feed_dict.update(self.feed)
    feed_dict[self.learning_rate] = lr

    # fetch only the loss and summary (no train_op), so the weights stay untouched
    fetches = [loss_op, self.summary_op]
    fetched = self.sess.run(fetches, feed_dict)
    loss = fetched[0]

    # exponential moving average of the validation loss, written to the val writer
    if loss_mva_valid is None: loss_mva_valid = loss
    loss_mva_valid = .9 * loss_mva_valid + .1 * loss
    self.val_writer.add_summary(fetched[1], step_now)
    form = 'VALIDATION step {} - loss {} - moving ave loss {}'
    self.say(form.format(step_now, loss, loss_mva_valid))
```
In defaults.py, just add this line:

```python
self.define('val_steps', 1, 'evaluate validation loss every #val_steps iterations')
```
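For reference, a hypothetical invocation combining the flags discussed in this thread might look like the following (all paths are placeholders, and --val_dataset, --val_annotation and --val_steps only exist after applying the modifications above):

./flow --model cfg/yolo.cfg --load bin/yolo.weights --train --gpu 1.0 --dataset train/images --annotation train/annotations --val_dataset val/images --val_annotation val/annotations --val_steps 50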
I don't quite get the second question about adding the accuracy.
@Costyv95 Thanks for your answer. The second question means: when validating, can we get not only the validation loss but also the validation accuracy?
@dream-will For that you would have to implement a custom accuracy method yourself that compares the ground-truth bboxes with the predicted bboxes (to get the predicted bboxes, see the code used for prediction), but I don't see a reason for that because the loss is enough. Be aware that the validation loss you see is computed only on a random mini-batch from the validation set, but on a large enough validation dataset this represents the test loss very well.
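If someone does want a rough detection accuracy anyway, here is a minimal self-contained sketch (not darkflow code; the box format, greedy matching and the 0.5 IoU threshold are my own assumptions) that counts a ground-truth box as detected when a same-class prediction overlaps it with IoU >= 0.5:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def detection_accuracy(gt_boxes, pred_boxes, thresh=0.5):
    """Fraction of ground-truth boxes greedily matched by a same-class
    prediction with IoU >= thresh. Boxes are (label, x1, y1, x2, y2)."""
    matched, used = 0, set()
    for label, *g in gt_boxes:
        best, best_j = 0.0, None
        for j, (plabel, *p) in enumerate(pred_boxes):
            if j in used or plabel != label:
                continue
            score = iou(g, p)
            if score > best:
                best, best_j = score, j
        if best_j is not None and best >= thresh:
            matched += 1
            used.add(best_j)
    return matched / len(gt_boxes) if gt_boxes else 1.0

# toy usage
gt = [('person', 10, 10, 50, 80)]
pred = [('person', 12, 8, 52, 78), ('dog', 0, 0, 5, 5)]
print(detection_accuracy(gt, pred))  # 1.0 for this toy example
```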
@Costyv95 ok,thanks
Hi @Costyv95. I'm having a problem getting the validation loss values to print. I modified all the files following your instructions and code. This is the error:
File "flow", line 6, in <module> cliHandler(sys.argv) File "/home/alxe/ML/darkflow/darkflow/cli.py", line 26, in cliHandler tfnet = TFNet(FLAGS) File "/home/alxe/ML/darkflow/darkflow/net/build.py", line 64, in __init__ self.framework = create_framework(*args) File "/home/alxe/ML/darkflow/darkflow/net/framework.py", line 59, in create_framework return this(meta, FLAGS) File "/home/alxe/ML/darkflow/darkflow/net/framework.py", line 15, in __init__ self.constructor(meta, FLAGS) File "/home/alxe/ML/darkflow/darkflow/net/yolo/__init__.py", line 20, in constructor misc.labels(meta, FLAGS) #We're not loading from a .pb so we do need to load the labels File "/home/alxe/ML/darkflow/darkflow/net/yolo/misc.py", line 36, in labels with open(file, 'r') as f: TypeError: coercing to Unicode: need string or buffer, NoneType found
Can you print the value of the file variable?
@Costyv95 No, I can't. This is what I run:
python flow --model cfg/tiny-yolo-voc-1c.cfg --train --dataset train/images --annotation train/annotations --load bin/yolo.weights --gpu 1.0 --epoch 300
What I meant by the "file variable" is the variable used at line 36 in misc.py, because I cannot really understand what's wrong with your code. Don't you have any --val_dataset argument? How did you implement the change? Did you split the dataset inside the code, or did you add the --val_dataset argument?
@Costyv95 Hi, I copied and pasted your files from diff.zip, then I tried to train with the command:

flow --train --model ./coke/yolo-coke-2c.cfg --annotation ./coke/train/annotations --dataset ./coke/train/images --gpu 1.0 --batch 8 --save 1000 --val_dataset ./coke/validation/images --val_annotation ./coke/validation/annotations
But I still got an error:

```
[nkhanh@localhost khanh]$ ./run_coke.sh
Parsing ./coke/yolo-coke-2c.cfg
Loading None ...
Finished in 0.0001392364501953125s

Traceback (most recent call last):
  File "/usr/local/bin/flow", line 6, in <module>
```
@khanh1412 In misc.py, at line 29, change it to point to your custom labels file: file = 'labels.txt'
P.S. It's a temporary solution.
@Costyv95 How should I understand "remove '[' and ']'"?
```
Enter training ...

Traceback (most recent call last):
  File "flow", line 6, in <module>
```
Hi @Costyv95, where should I add the tf.summary.FileWriter for validation so I can visualize the validation loss graph in TensorBoard? Thanks.
@Costyv95 I tried your zip file, diff.zip. But the terminal tells me that --val_dataset is an invalid argument. Do I need to change other files?
You should replace all the files, including the yolo-data and yolov2-data ones: simply copy and paste them into the corresponding folders and rename them to just "data" so they replace the data.py files there.
@khanhhh Add the line "self.define('labels', 'labels.txt', 'path to labels file')" to "def setDefaults(self):" in "darkflow\defaults.py"; then you can use "--labels xxx.txt" as before.
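For anyone unsure why this works, here is an illustrative toy (not darkflow's actual argHandler; the class below is a simplified stand-in) showing how a define() entry in setDefaults() gives the flag a default value that a CLI argument can then override:

```python
# Simplified stand-in for darkflow's FLAGS object, for illustration only.
class Flags(dict):
    def define(self, name, default, description):
        self[name] = default          # store the default under the flag name

    def setDefaults(self):
        self.define('labels', 'labels.txt', 'path to labels file')  # the added line

    def __getattr__(self, name):
        return self[name]             # allows FLAGS.labels style access

flags = Flags()
flags.setDefaults()
print(flags.labels)  # 'labels.txt' unless overridden, e.g. by --labels xxx.txt
```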
Thank you so much!!!!!!!!
@Costyv95 Hello, I want to know how to set the path "gs://bucket_hand_detection_2" in darkflow\defaults.py. My Python (3.7) can't find this path and throws an error. Also, what does the "bucket" represent?
Same error here, but checkpoints are saved normally, so I don't know what this error is. @Costyv95
thanks @Costyv95!
Hi @Costyv95, YOLO trains and outputs the validation loss, but after 1000 steps it throws an error: FileNotFoundError: [Errno 2] No such file or directory: 'gsutil': 'gsutil'
@akmeraki Hello, I ran into the same problem. Did you find any solution to this error?
@zhe0503 @akmeraki I hope it's not too late, but all I did was run "pip install gsutil" and it solved the problem!
Hey guys. What would I need to do if I want to get the accuracy of the whole trained model? For instance, I am training my model and I stop the training at some point. Now I have the last saved checkpoint and I want to calculate the accuracy up to that checkpoint. The files in the ckpt folder are named:
checkpoint yolo-new-50.data-00000-of-00001 yolo-new-50.index yolo-new-50.meta yolo-new-50.profile
I would appreciate the help guys.
@Costyv95 I followed your kind instructions carefully, but it seems that train.py does not recognize the --val_... arguments. Would you please help me? The error is as below: ERROR - Invalid argument: --val_dataset
> This happens because the code I gave you has some modifications for an adaptive learning rate, so there is one more change you have to make. You can find it here: 124d55d
>
> And you should add --val_dataset and --val_annotation to the arguments in order to get a validation loss.
It doesn't work for me. I get the error below: ERROR - Invalid argument: --val_dataset
During training I get:

```
step 1 - loss 240.92623901367188 - moving ave loss 240.92623901367188
step 2 - loss 241.2866668701172 - moving ave loss 240.96228179931643
step 3 - loss 239.79562377929688 - moving ave loss 240.84561599731447
```

How do I add training accuracy and validation accuracy, so that the output looks something like this?

```
step 1 - loss 240.92623901367188 - moving ave loss 240.92623901367188 - train 0.221
step 2 - loss 241.2866668701172 - moving ave loss 240.96228179931643 - train 0.222
step 3 - loss 239.79562377929688 - moving ave loss 240.84561599731447 - train 0.223
Finished 1 Epoch, validation 0.210
```