In prover/workers/generator.py, I changed line 32 to
llm = LLM(model=self.model_path, max_num_batched_tokens=8192, seed=seed, trust_remote_code=True, max_model_len=1096)
adding the `max_model_len` argument, and that got me past the vLLM error.
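For what it's worth, the same arguments can be exercised outside the pipeline with a small standalone script; this is just my own sketch, not code from the repo (the model name and prompt are placeholders):

```python
# Standalone sanity check that vLLM accepts the arguments used above.
# The model name and prompt below are placeholders, not values from the repo.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-Prover-V1.5-RL",  # placeholder; substitute your model_path
    max_num_batched_tokens=8192,
    seed=1,
    trust_remote_code=True,
    max_model_len=1096,  # caps the context length so it fits in the available KV cache
)
outputs = llm.generate(
    ["theorem one_add_one : 1 + 1 = 2 := by"],
    SamplingParams(temperature=1.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```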
I made a new configs/ file. It uses the same settings as the configs/RMaxTS.py file, but I changed `data_path` to point to a jsonl file containing a single simple theorem. When I run `python -m prover.launch --config=configs/RMaxTSbin.py --log_dir=logs/RMaxTSbin_results`, it results in an error which I believe is related to https://github.com/vllm-project/vllm/issues/2418.
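For context, a single-theorem jsonl file can be produced with something like the sketch below; the field names are my assumption based on the minif2f jsonl shipped with the repo, so treat the exact schema as a guess, and the theorem itself is just a trivial placeholder:

```python
# Sketch of a one-entry jsonl data file with a trivial theorem.
# Field names are assumed to mirror the repo's minif2f jsonl; adjust them
# if the pipeline expects a different schema.
import json

entry = {
    "name": "one_add_one",
    "split": "test",
    "informal_prefix": "/-- Show that 1 + 1 = 2. -/\n",
    "formal_statement": "theorem one_add_one : 1 + 1 = 2 := by\n",
    "goal": "⊢ 1 + 1 = 2",
    "header": "import Mathlib\n\n",
}

with open("datasets/single_theorem.jsonl", "w") as f:  # path is a placeholder
    f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```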
My new config file is: