neulab / knn-transformers

PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an implementation of kNN-LM and kNN-MT
MIT License

TypeError: pre_forward_hook() missing 1 required positional argument: 'labels' #7

Closed YsylviaUC closed 1 year ago

YsylviaUC commented 2 years ago

Hi~, I copied the knn_lm.py file into my project, called `knn_wrapper.break_into(my_model)`, and tried to save a datastore following the kNN-MT instructions, but unfortunately I ran into a problem when running the code:

[screenshot of the traceback: `TypeError: pre_forward_hook() missing 1 required positional argument: 'labels'`]

urialon commented 2 years ago

Hi @YsylviaUC , Thank you for your interest in our work and for reporting this!

I just fixed that, and set input_ids to be the labels when labels are not specified.
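The fix can be sketched roughly like this (the hook signature and return value here are assumed for illustration, not copied from the repo):

```python
def pre_forward_hook(input_ids=None, attention_mask=None, labels=None, **kwargs):
    # If the caller did not pass labels (e.g. during generation-style
    # evaluation), fall back to input_ids, as in the fix described above.
    if labels is None:
        labels = input_ids
    return input_ids, attention_mask, labels
```

With this default, code paths that never supply `labels` no longer crash, and the standard language-modeling convention (the model shifts `input_ids` internally to form targets) still holds.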

But I am wondering how it happened, because it doesn't happen in my scripts (that is, labels are always provided). Which model are you using? Are you using --predict_with_generate by accident?

Best, Uri

YsylviaUC commented 2 years ago

Yes! I used --predict_with_generate by accident and only just found this mistake. Thank you so much!!

urialon commented 2 years ago

Great! It's important not to use --predict_with_generate when saving the datastore, because we want the datastore to be saved according to the training labels, not to the random/generated labels.
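To illustrate the point (all names here are hypothetical, not the repo's API): a kNN datastore pairs each context representation with the true next token, so the values must come from the gold training labels rather than from tokens the model generates:

```python
def build_datastore(context_vectors, gold_next_tokens):
    # Each entry maps a context representation (key) to the TRUE next
    # token from the training data (value). Saving generated tokens
    # here instead would corrupt the datastore's values.
    return list(zip(context_vectors, gold_next_tokens))
```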

Let me know if you have any more questions, Uri

YsylviaUC commented 2 years ago


Hi~, I have two questions:

  1. What is the purpose of --knn_drop in https://github.com/neulab/knn-transformers/blob/fdb919c29be1792cdcb3a3aadbbee26e4c7bd338/run_translation.py#L279 ? It seems that it is not used.

  2. How can I get the retrieved examples themselves, not just their vector representations? I am curious about the retrieved examples and want to know more details ;D

urialon commented 2 years ago

Hi @YsylviaUC , Good catch!

This is a flag we used to run some experiments that randomly decide whether or not to perform a kNN search, similar to some of the kNN-LM experiments in the paper. It allowed us to measure points between kNN-LM (a 0% fraction of saved searches) and the base LM (100% saved searches):

[figure from the paper: results at intermediate fractions of saved searches between kNN-LM and the base LM]

So, for example, --knn_drop=0.3 meant that 30% of the timesteps used the standard LM and 70% of the timesteps used kNN.
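The described behavior can be sketched as follows (illustrative only; the function name and structure are assumed, not taken from the repo):

```python
import random

def knn_drop_step(knn_drop: float) -> str:
    # With probability `knn_drop`, skip retrieval and use the base LM
    # for this timestep; otherwise perform a kNN search.
    return "lm" if random.random() < knn_drop else "knn"

random.seed(0)
steps = [knn_drop_step(0.3) for _ in range(10_000)]
lm_fraction = steps.count("lm") / len(steps)  # close to 0.3
```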

Eventually we deleted the code for this functionality because we believed it would not be useful for most users, but we accidentally kept the flag, which we should indeed delete.

Thank you for noticing this! Uri