ozansener / active_learning_coreset

Source code for ICLR 2018 Paper: Active Learning for Convolutional Neural Networks: A Core-Set Approach
MIT License

Issues with greedy_facility_location.py #3

Closed pavanteja295 closed 2 years ago

pavanteja295 commented 6 years ago

Hi, I think the file greedy_facility_location.py is not executable: on line 31, #for lab in dat['gt_f'].shape[0]): has unbalanced parentheses.

On line 38, no = numpy.argmax(d) uses d, which is not defined.

It would be great if you could share the code you used to obtain the results in the ICLR paper.

pavanteja295 commented 6 years ago

Also, according to the paper, the distance matrix should be computed with the subset s, if I'm not mistaken, but this code does not compute the distance matrix with the subset. This file is also not error-free. Could you please fix the errors?

pavanteja295 commented 6 years ago

Most of the code is actually incomplete, and the implementation details given in the paper differ from those in the repo. Could you please let us know why?

ozansener commented 6 years ago

This is only the active learning part; the actual training part is not here. Unfortunately, we cannot open-source it, but it is based on http://torch.ch/blog/2015/07/30/cifar.html.

If you only need the greedy solver, you can use this active learning module: https://github.com/google/active-learning/blob/master/sampling_methods/kcenter_greedy.py which also implements our method.

The full solver is functional if you generate and pickle the embeddings. I will add a how-to document for it.
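
For reference, here is a minimal NumPy sketch of the k-center greedy selection on precomputed embeddings; the function name and arguments are illustrative, not the repo's or the Google module's exact API:

```python
import numpy as np

def k_center_greedy(embeddings, already_selected, budget):
    """Pick `budget` new indices by repeatedly choosing the point farthest
    from its nearest selected center (the greedy 2-approximation for k-center)."""
    n = embeddings.shape[0]
    if len(already_selected) > 0:
        # Distance of every point to its nearest already-labelled center.
        dists = np.linalg.norm(
            embeddings[:, None, :] - embeddings[already_selected][None, :, :], axis=2)
        min_dist = dists.min(axis=1)
    else:
        # Nothing labelled yet: the first argmax below just picks index 0.
        min_dist = np.full(n, np.inf)
    new_points = []
    for _ in range(budget):
        idx = int(np.argmax(min_dist))                # farthest point from current centers
        new_points.append(idx)
        dist_to_idx = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        min_dist = np.minimum(min_dist, dist_to_idx)  # refresh nearest-center distances
    return new_points

# e.g. query = k_center_greedy(features, already_selected=list(labeled_idx), budget=1000)
```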

pavanteja295 commented 6 years ago

Could you please let me know the implementation details of the method, such as the learning rate and the number of epochs per iteration?

ozansener commented 6 years ago

We trained for 200 epochs and halved the learning rate every 50 epochs. We used random horizontal flips, subtracted the average R, G, B values from their respective channels, and whitened the images before training.
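
A rough sketch of that schedule and preprocessing in NumPy (the base learning rate, the reading of "whitened" as per-channel standardization, and the function names are illustrative assumptions, not the exact training script):

```python
import numpy as np

def learning_rate(epoch, base_lr=1e-4):
    # Halve the learning rate every 50 epochs over the 200-epoch run.
    return base_lr * (0.5 ** (epoch // 50))

def preprocess(images):
    # images: float array of shape (N, H, W, 3).
    # Subtract the dataset-average R, G, B values from their channels ...
    centered = images - images.mean(axis=(0, 1, 2), keepdims=True)
    # ... and whiten by dividing out the per-channel standard deviation.
    return centered / (centered.std(axis=(0, 1, 2), keepdims=True) + 1e-7)

def random_horizontal_flip(batch, rng=None):
    # Flip each image left-right with probability 0.5 (training-time augmentation only).
    rng = np.random.default_rng() if rng is None else rng
    flip = rng.random(batch.shape[0]) < 0.5
    out = batch.copy()
    out[flip] = out[flip, :, ::-1, :]
    return out
```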

pavanteja295 commented 6 years ago

Cool, thanks a lot Ozan. But the paper mentions a constant learning rate of 0.0001 with RMSProp, so that is a bit conflicting. Also, I hope these settings hold for CIFAR-100 as well. Most importantly, for reducing the weights of the input neurons (alpha in the paper), I suppose you are using a regularizer?

ozansener commented 6 years ago

What is alpha? I do not remember any alpha in the paper.

CIFAR-10 and CIFAR-100 use the same script.

As I mentioned, the network is exactly the same as here: http://torch.ch/blog/2015/07/30/cifar.html. We only used its TensorFlow version. They use SGD and we use RMSProp; that is the only difference. You can use that script, save the feature embeddings to a file, and use our code for the active learning.
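
A minimal sketch of dumping the embeddings to a pickle file for the solver scripts; the function name, the output filename, and the (num_images, embedding_dim) layout are assumptions, not the repo's documented interface:

```python
import pickle
import numpy as np

def save_embeddings(feature_fn, images, path="embeddings.pkl", batch_size=256):
    # feature_fn: any callable mapping a batch of images to their network
    # embeddings (e.g. the penultimate-layer activations of the trained model).
    feats = [np.asarray(feature_fn(images[i:i + batch_size]))
             for i in range(0, len(images), batch_size)]
    feats = np.concatenate(feats, axis=0)  # shape: (num_images, embedding_dim)
    with open(path, "wb") as f:
        pickle.dump(feats, f)
    return feats
```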

pavanteja295 commented 6 years ago

Sorry my bad.

Also, does the ratio of labelled images mentioned in the graphs refer to the total number of labelled images?

Ratio of labelled images = [0.1, 0.2, ..., 1]. So, when you say 0.1, are you training with 5k samples, for which you get an accuracy of ~30%?

ozansener commented 6 years ago

Yes, the ratio is relative to the full training set.

fatemehtd commented 5 years ago

Hi, do you plan to add a how-to document anytime soon? It would be a great help for understanding your method.

Thanks

ozansener commented 5 years ago

I will make an update soon, probably in November, with a detailed readme and the training details.

jongchyisu commented 5 years ago

Hi, I'm also trying to run your code. I'm running full_solver_gurobi.py, but I don't know how to use this script. What are "Model" and "UB" in this file, and what is the format (shape) of the embeddings? Thanks in advance!

fatemehtd commented 5 years ago

Thanks Ozan, I ran your code, and the final results are solutions saved in "*.sol" format (such as "s_100_solution_0.055231944963340884.sol"). Now how can I find which points have been chosen to be sent to the oracle?

ozansener commented 5 years ago

@redsadaf

You can use the script at https://github.com/ozansener/active_learning_coreset/blob/master/coreset/gurobi_solution_parser.py to parse the results.
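
In case that parser needs adapting, here is a rough sketch of reading a Gurobi .sol file directly; the mapping from variable names back to data-point indices depends on how the model was built, so the last step is an assumption:

```python
def read_gurobi_sol(path):
    # Gurobi .sol files are plain text: comment lines start with '#',
    # every other line is "<variable name> <value>".
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            name, value = line.rsplit(maxsplit=1)
            values[name] = float(value)
    return values

solution = read_gurobi_sol("s_100_solution_0.055231944963340884.sol")
picked = [name for name, v in solution.items() if v > 0.5]  # binary selection variables set to 1
```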

fatemehtd commented 5 years ago

Hi Ozan, do you still plan to make an update soon with the detailed readme and training details?

linlinlin1993 commented 3 years ago

Hi Ozan, could you please add the details of the training settings and explain how to use your code?

ozansener commented 2 years ago

Please use the code from https://github.com/google/active-learning/blob/master/sampling_methods/kcenter_greedy.py. We are not supporting this repository anymore.