UKPLab / gpl

Powerful unsupervised domain adaptation method for dense retrieval. Requires only unlabeled corpus and yields massive improvement: "GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval" https://arxiv.org/abs/2112.07577
Apache License 2.0

How to create dataset to train in GPL from normal set of domain specific word docs or pdfs. #3

Open kingafy opened 2 years ago

kingafy commented 2 years ago

First of all, thanks a lot for bringing out this unique paper. After going through it I wanted to try this approach, but I am a little confused about the initial data that needs to be created for training with GPL. Currently all of my domain corpus content is in Word files or PDFs and I don't have any labelled data. Since your paper claims the method works with unlabeled data, could you please guide me on how to create the initial input data to pass through this method? Also, could you share whether you have any pretrained models available to experiment with the method?

kwang2049 commented 2 years ago

Hi @kingafy, GPL needs only a corpus.jsonl file (a data sample is here) for a minimal run.

Specifically, you need three steps:

  1. Prepare your corpus in the same format as the data sample mentioned above (a sketch of what a single line looks like is shown after this list);
  2. Put your corpus.jsonl under a folder, e.g. one named "generated", which GPL will use for data loading and data generation;
  3. Call gpl.train with the folder path as an input argument:
    python -m gpl.train \
    --path_to_generated_data "generated" \
    --output_dir "output"
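
For reference, each line in that data sample is a JSON object in the BeIR corpus format with `_id`, `title` and `text` fields. Below is only a rough sketch of how you could write your own passages into corpus.jsonl; the passages and ids are made up, and extracting plain text from your Word/PDF files is up to you (e.g. with python-docx or pypdf):

    import json
    import os

    # Plain-text passages extracted from your own Word/PDF documents
    # (the extraction step itself is not part of GPL).
    passages = [
        {"title": "Example document 1", "text": "First passage of domain-specific text ..."},
        {"title": "Example document 2", "text": "Another passage of domain-specific text ..."},
    ]

    os.makedirs("generated", exist_ok=True)
    with open("generated/corpus.jsonl", "w", encoding="utf-8") as f_out:
        for i, passage in enumerate(passages):
            line = {
                "_id": str(i),              # unique document id
                "title": passage["title"],  # may be an empty string
                "text": passage["text"],    # the passage GPL will generate queries for
            }
            f_out.write(json.dumps(line) + "\n")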

I will include more explanations about this in the readme in the future.

junefeld commented 2 years ago

I get this error:

  File "train.py", line 9, in <module>
    queries_per_passage=-1)
  File "E:\Data_Science\virtual_envs\py36\lib\site-packages\gpl\train.py", line 124, in train
    corpus, gen_queries, gen_qrels = GenericDataLoader(path_to_generated_data).load(split="train")
  File "E:\Data_Science\virtual_envs\py36\lib\site-packages\beir\datasets\data_loader.py", line 63, in load
    self.check(fIn=self.query_file, ext="jsonl")
  File "E:\Data_Science\virtual_envs\py36\lib\site-packages\beir\datasets\data_loader.py", line 30, in check
    raise ValueError("File {} not present! Please provide accurate file.".format(fIn))
ValueError: File generated\queries.jsonl not present! Please provide accurate file.

Script:

import gpl

#dataset = 'fiqa'
gpl.train(path_to_generated_data="generated",
          output_dir="output",
          mnrl_output_dir="output",
          mnrl_evaluation_output="output",
          new_size=-1,
          queries_per_passage=-1)

The data in "generated" comes from the suggested example (only corpus.jsonl is in the folder).

junefeld commented 2 years ago

Anyone have any luck?

kwang2049 commented 2 years ago

Hi @christopherfeld, could you please try this toy example: https://github.com/UKPLab/gpl/issues/5#issuecomment-1144256494 and let me know whether it is runnable in your case?

junefeld commented 2 years ago

No luck. To give some baseline...

I tried your code from April 14 (above) to no avail, and the toy example from the comment above did not work either.

My folder structure looks like this: (screenshot)

Code (per your request: https://github.com/UKPLab/gpl/issues/5#issuecomment-1144256494):

    python -m gpl.train \
    --path_to_generated_data "generated/" \
    --base_ckpt "distilbert-base-uncased" \
    --gpl_score_function "dot" \
    --batch_size_gpl 4 \
    --gpl_steps 100 \
    --new_size 10 \
    --queries_per_passage 1 \
    --output_dir "output/corpus.jsonl" \
    --evaluation_data "corpus" \
    --evaluation_output "evaluation/corpus.jsonl" \
    --generator "BeIR/query-gen-msmarco-t5-base-v1" \
    --retrievers "msmarco-distilbert-base-v3" "msmarco-MiniLM-L-6-v3" \
    --retriever_score_functions "cos_sim" "cos_sim" \
    --cross_encoder "cross-encoder/ms-marco-MiniLM-L-6-v2" \
    --qgen_prefix "qgen"

Resultant error: (screenshot)

Thoughts: this got me thinking that the qgen prefix might be the issue, so I set it to "", which still adds a '-'. I tried to rerun the code after renaming the outputs in the generated folder to remove the qgen and the '-', and then ran into an index error: (screenshot)

kwang2049 commented 2 years ago

Hi @christopherfeld, I have created a google colab notebook showing the running results of the toy example: https://colab.research.google.com/drive/1Wis4WugIvpnSAc7F7HGBkB38lGvNHTtX?usp=sharing Please have a look:).

BTW, I think the error might be because the export command does not work on Windows (I am not really familiar with Windows).
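
If that is the problem, one possible workaround (just a sketch, assuming the CLI flags map one-to-one onto gpl.train keyword arguments, as your earlier Python script suggests, with the multi-value flags passed as lists) is to skip the shell variables entirely and call gpl.train from Python with the same settings as the command you posted:

    import gpl

    # Same settings as the python -m gpl.train command above, passed as keyword
    # arguments so that no shell `export` is needed on Windows.
    gpl.train(
        path_to_generated_data="generated/",
        base_ckpt="distilbert-base-uncased",
        gpl_score_function="dot",
        batch_size_gpl=4,
        gpl_steps=100,
        new_size=10,
        queries_per_passage=1,
        output_dir="output/corpus.jsonl",
        evaluation_data="corpus",
        evaluation_output="evaluation/corpus.jsonl",
        generator="BeIR/query-gen-msmarco-t5-base-v1",
        retrievers=["msmarco-distilbert-base-v3", "msmarco-MiniLM-L-6-v3"],
        retriever_score_functions=["cos_sim", "cos_sim"],
        cross_encoder="cross-encoder/ms-marco-MiniLM-L-6-v2",
        qgen_prefix="qgen",
    )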

junefeld commented 2 years ago

Thanks! I ran this on my personal M1 and it works! Now I'm just waiting on my work Mac to get here :)