usail-hkust / LLMTSCS

Official code for article "LLMLight: Large Language Models as Traffic Signal Control Agents".

Log folder not created correctly when running run_open_LLM_with_vllm.py #25

Open aligoldenhat opened 1 week ago

aligoldenhat commented 1 week ago

I am running the following command for fine-tuning a model using run_open_LLM_with_vllm.py:

!python run_open_LLM_with_vllm.py --llm_model llama \
                       --llm_path /content/llama \
                       --dataset hangzhou \
                       --traffic_file anon_4_4_hangzhou_real.json \
                       --proj_name TSCS

I want to save the logs in the {llm_model}_logs folder (I use these logs for fine-tuning). However, instead of creating this folder, the script creates a fails folder and stores a .json file there. The expected behavior is that a folder named {llm_model}_logs (e.g., llama_logs) is created to store the fine-tuning logs.

Could you help me fix this issue so that the fine-tuning logs are saved correctly in the {llm_model}_logs folder? If any additional configuration is required, or if this is a bug in the script, please let me know.

Gungnir2099 commented 1 week ago

Thank you for your suggestion. I have updated the code.

aligoldenhat commented 1 week ago

Thanks for the recent update. I noticed that while log_dir is now created if it does not exist, failure messages are still being saved in the fails folder, and nothing is stored in log_dir as expected.

The expected behavior is to save the fine-tuning logs in {llm_model}_logs (similar to gpt_logs in run_chatgpt.py), but that’s not happening. I reviewed the LLM_Inference_VLLM class in llm_aft_trainer.py and found that LOG_DIR isn’t used for logging there.

Is run_open_LLM_with_vllm.py meant to generate logs for fine-tuning, like run_chatgpt.py? If not, could you suggest how to modify the code to save logs for fine-tuning in {llm_model}_logs?

Thanks again for your help!

Gungnir2099 commented 1 week ago

Thank you for the feedback. With run_open_LLM_with_vllm.py, the LLM responses are saved in self.dic_path["PATH_TO_WORK_DIRECTORY"] (as stated in line 1066). You can either change that path to another directory or find the log file under self.dic_path["PATH_TO_WORK_DIRECTORY"].
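To tie the two answers together, here is a minimal sketch of how one might redirect the saved responses into a {llm_model}_logs folder instead of PATH_TO_WORK_DIRECTORY. The function name `save_llm_log` and the record layout are hypothetical; only the `dic_path["PATH_TO_WORK_DIRECTORY"]` key and the `{llm_model}_logs` naming come from the thread, and the actual attributes in llm_aft_trainer.py may differ.

```python
import json
import os


def save_llm_log(llm_model, dic_path, records):
    """Hypothetical helper: write LLM responses to '{llm_model}_logs'.

    By default the repo saves responses under
    dic_path["PATH_TO_WORK_DIRECTORY"]; this sketch points the log
    directory at '{llm_model}_logs' (e.g. 'llama_logs') instead.
    """
    log_dir = f"{llm_model}_logs"
    # Create the folder if it does not exist, as requested in the issue.
    os.makedirs(log_dir, exist_ok=True)
    log_file = os.path.join(log_dir, "responses.json")
    with open(log_file, "w") as f:
        json.dump(records, f, indent=2)
    return log_file
```

In the repo itself, the equivalent change would be to swap the directory used at the save site (around line 1066 per the reply above) from `self.dic_path["PATH_TO_WORK_DIRECTORY"]` to a `{llm_model}_logs` path like the one built here.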