castorini / rank_llm

Repository for prompt-decoding using LLMs (GPT-3.5, GPT-4, Vicuna, and Zephyr)
http://rankllm.ai
Apache License 2.0

P3 - Trim Project Dependencies #66

Closed. ronakice closed this issue 5 months ago.

ronakice commented 5 months ago

Pretty sure these project dependencies are not required:

faiss-gpu == 1.7.2
accelerate == 0.26.1

Pyserini uses faiss-cpu. I believe accelerate is only used for training, which we are not currently supporting.

sahel-sh commented 5 months ago

@AndreSlavescu if you leave a comment here, I can assign it to you as well, and all three of us can try trimming the requirements. I would like to be super sure before changing a working set of requirements.

sahel-sh commented 5 months ago

I confirm that accelerate is needed; I will check faiss-gpu next. Steps to reproduce the dependency error after removing accelerate are included below:

conda create -n dep_check python=3.10
conda activate dep_check

Remove accelerate and faiss-gpu from requirements.txt:

tqdm>=4.66.1
openai>=1.9.0
tiktoken>=0.5.2
transformers>=4.37.0
pyserini>=0.24.0
python-dotenv>=1.0.1
faiss-cpu>=1.7.2
ftfy>=6.1.3
fschat>=0.2.35

Install torch:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Install the trimmed requirements:

pip install -r requirements.txt

Run the following:

CUDA_VISIBLE_DEVICES=6 python src/rank_llm/scripts/run_rank_llm.py  --model_path=castorini/rank_zephyr_7b_v1_full  --top_k_candidates=100  --dataset=dl21  --retrieval_method=bm25  --prompt_mode=rank_GPT  --context_size=4096 --variable_passages

You will get the following error:

Traceback (most recent call last):
  File "/store2/scratch/s8sharif/rank_llm_2/rank_llm/src/rank_llm/scripts/run_rank_llm.py", line 16, in <module>
    from rank_llm.retrieve_and_rerank import retrieve_and_rerank
  File "/store2/scratch/s8sharif/rank_llm_2/rank_llm/src/rank_llm/retrieve_and_rerank.py", line 6, in <module>
    from rank_llm.rerank.rank_listwise_os_llm import RankListwiseOSLLM
  File "/store2/scratch/s8sharif/rank_llm_2/rank_llm/src/rank_llm/rerank/rank_listwise_os_llm.py", line 5, in <module>
    from fastchat.model import load_model, get_conversation_template, add_model_args
  File "/home/s8sharif/.conda/envs/dep_check/lib/python3.10/site-packages/fastchat/model/__init__.py", line 1, in <module>
    from fastchat.model.model_adapter import (
  File "/home/s8sharif/.conda/envs/dep_check/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 30, in <module>
    from fastchat.model.compression import load_compress_model
  File "/home/s8sharif/.conda/envs/dep_check/lib/python3.10/site-packages/fastchat/model/compression.py", line 6, in <module>
    from accelerate import init_empty_weights
ModuleNotFoundError: No module named 'accelerate'
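Before removing a package from requirements.txt, a quick way to confirm whether it is actually importable in the current environment is to probe for it with the standard library (a hedged sketch, not part of the repo; the package names checked here are just the two under discussion):

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if the top-level module can be found in this environment."""
    return importlib.util.find_spec(module_name) is not None

# Check the two packages this issue proposes to drop
# (faiss-gpu and faiss-cpu both expose the top-level module "faiss").
for name in ("accelerate", "faiss"):
    print(name, "installed" if is_installed(name) else "missing")
```

Running this in the trimmed environment before and after `pip install -r requirements.txt` shows exactly which transitive dependency pulled the module in, without waiting for a deep import-time traceback like the one above.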

ronakice commented 5 months ago

ahh, weird they have it as an optional dependency.

ronakice commented 5 months ago

Guess if we specify it as such it would work: pip3 install "fschat[model_worker]>=0.2.35"? (The quotes matter, since brackets are special characters in most shells.)

ronakice commented 5 months ago

pip3 install "fschat[model_worker,webui]" is their official suggestion, I believe.
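If the extras route works, the same constraint can also live directly in requirements.txt rather than a separate pip invocation, since pip accepts extras syntax in requirements files (a sketch; the exact extras names are fastchat's, and whether model_worker alone suffices would need verification):

```
fschat[model_worker]>=0.2.35
```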

AndreSlavescu commented 5 months ago

interested

sahel-sh commented 5 months ago

I created a PR to fix this. I confirm that faiss-gpu is not needed, as @ronakice mentioned.

sahel-sh commented 5 months ago

> interested

Sorry @AndreSlavescu, I already created a PR for this one. Please feel free to review it.

sahel-sh commented 5 months ago

PR #70 completes this work.