Closed · oan-J closed this 4 months ago
I wonder how you fixed this problem; it seems I ran into the same issue too:
Token indices sequence length is longer than the specified maximum sequence length for this model (559 > 512). Running this sequence through the model will result in indexing errors
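For context, this warning is emitted when a tokenized input exceeds the model's 512-token limit. A minimal stand-alone sketch of the length check and the truncation that avoids it (the token ids here are placeholder integers, not real tokenizer output):

```python
def truncate_ids(token_ids, max_length=512):
    """Mimic the tokenizer's length check: warn, then cut off overflow tokens."""
    if len(token_ids) > max_length:
        print(f"Token indices sequence length is longer than the specified "
              f"maximum sequence length for this model "
              f"({len(token_ids)} > {max_length}).")
        return token_ids[:max_length]
    return token_ids

ids = list(range(559))               # stand-in for a 559-token input
assert len(truncate_ids(ids)) == 512  # overflow tokens are dropped
```

The warning itself is harmless as long as inputs are truncated before reaching the model; it only signals that the raw sequence was longer than the model can index.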
Hi, when I ran
./reproduce.sh ircot flan-t5-base hotpotqa
I got the warning shown above. I am not sure if this is expected; please let me know if there's anything I need to fix.
The following information might be relevant, so I'm including it here:
I changed the retriever_server port. Instead of
uvicorn serve:app --port 8000 --app-dir retriever_server
I ran
uvicorn serve:app --port 9201 --app-dir retriever_server
since port 8000 was already in use.
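A quick way to confirm a port conflict like this before picking a new one is to probe the port directly; a small sketch (not from the repo, just stdlib):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# e.g. port_in_use(8000) would report whether uvicorn can bind there
```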
Also, I made these changes: in predict.py and run.py, I set env_variables["RETRIEVER_PORT"] to 9201, since str(retriever_address["port"]) didn't resolve to the right port. I was using bf16. Since I got CUDA Out of Memory, I ran:
MODEL_NAME=flan-t5-base-bf16 RETRIEVER_PORT=9201 /mnt/.conda/envs/ircot/bin/uvicorn serve:app --port 8010 --app-dir llm_server
Also, I changed base_configs/ircot_flan_t5_base_hotpotqa.jsonnet accordingly. About localhost (I feel like something is wrong with the outputs, but I am not so sure): first, I started Elasticsearch and got the following on http://127.0.0.1:9200
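For the RETRIEVER_PORT override, the usual pattern is to let an environment variable take priority over the configured value. A stand-alone sketch of that fallback logic (env_variables and retriever_address are names from the repo's scripts; this function is just illustrative):

```python
import os

def get_retriever_port(default: int = 8000) -> int:
    """Prefer an explicit RETRIEVER_PORT env var; fall back to the default."""
    return int(os.environ.get("RETRIEVER_PORT", default))
```

Setting RETRIEVER_PORT=9201 in the launch command, as above, then makes every script agree on the port without editing each call site.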
Second, I started the retriever_server and got the following on http://127.0.0.1:9201
Third, I started
MODEL_NAME=flan-t5-base-bf16 RETRIEVER_PORT=9201 /mnt/.conda/envs/ircot/bin/uvicorn serve:app --port 8010 --app-dir llm_server
and got the following on http://127.0.0.1:8010/. Thank you in advance!
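To sanity-check that all three services above (Elasticsearch, retriever_server, llm_server) are actually reachable, a minimal probe over the same URLs (ports as in the setup above; this assumes nothing about the response bodies):

```python
from urllib.request import urlopen
from urllib.error import URLError

def service_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if the URL answers with any non-server-error HTTP response."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 500
    except (URLError, OSError):
        return False

for url in ("http://127.0.0.1:9200",    # Elasticsearch
            "http://127.0.0.1:9201",    # retriever_server
            "http://127.0.0.1:8010"):   # llm_server
    print(url, "up" if service_up(url) else "down")
```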