logikon-ai / cot-eval

A framework for evaluating the effectiveness of chain-of-thought reasoning in language models.
https://huggingface.co/spaces/logikon/open_cot_leaderboard
MIT License

Evaluate: Qwen/Qwen-72B-Chat #31

Closed: ggbetz closed this issue 3 months ago

ggbetz commented 3 months ago

Check upon issue creation:

Parameters:

NEXT_MODEL_PATH=Qwen/Qwen-72B-Chat
NEXT_MODEL_REVISION=main
NEXT_MODEL_PRECISION=bfloat16
MAX_LENGTH=2048 
GPU_MEMORY_UTILIZATION=0.8
VLLM_SWAP_SPACE=16
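
For context, these parameters are ultimately passed through to the vLLM engine (the engine config line in the traceback below shows the same values). A minimal sketch of the wiring through the langchain VLLM wrapper, assuming it mirrors what src/cot_eval/__main__.py does; the actual code may differ:

import os
from langchain_community.llms import VLLM

# Hypothetical wiring of the parameters above; names mirror the env vars.
llm = VLLM(
    model=os.environ["NEXT_MODEL_PATH"],        # Qwen/Qwen-72B-Chat
    dtype=os.environ["NEXT_MODEL_PRECISION"],   # bfloat16
    trust_remote_code=True,  # Qwen ships custom tokenizer/model code
    vllm_kwargs={
        "revision": os.environ["NEXT_MODEL_REVISION"],   # main
        "max_model_len": int(os.environ["MAX_LENGTH"]),  # 2048
        "gpu_memory_utilization": float(os.environ["GPU_MEMORY_UTILIZATION"]),
        "swap_space": int(os.environ["VLLM_SWAP_SPACE"]),  # GiB of CPU swap space
    },
)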

ToDos:

yakazimir commented 3 months ago

Hey @ggbetz, I got this error:

2024-04-04T19:20:54.811812368Z 2024-04-04 19:20:54,811  INFO worker.py:1752 -- Started a local Ray instance.
2024-04-04T19:20:56.384661292Z INFO 04-04 19:20:56 llm_engine.py:79] Initializing an LLM engine with config: model='Qwen/Qwen-72B-Chat', tokenizer='Qwen/Qwen-72B-Chat', tokenizer_mode=auto, revision=main, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=8, disable_custom_all_reduce=True, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, seed=42)
2024-04-04T19:20:57.002894468Z Traceback (most recent call last):
2024-04-04T19:20:57.002931462Z   File "/usr/local/bin/cot-eval", line 8, in <module>
2024-04-04T19:20:57.003023897Z     sys.exit(main())
2024-04-04T19:20:57.003061582Z   File "/workspace/cot-eval/src/cot_eval/__main__.py", line 133, in main
2024-04-04T19:20:57.003116091Z     llm = VLLM(
2024-04-04T19:20:57.003128376Z   File "/usr/local/lib/python3.10/dist-packages/langchain_core/load/serializable.py", line 120, in __init__
2024-04-04T19:20:57.003211783Z     super().__init__(**kwargs)
2024-04-04T19:20:57.003222555Z   File "/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py", line 339, in __init__
2024-04-04T19:20:57.003344399Z     values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
2024-04-04T19:20:57.003354098Z   File "/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py", line 1100, in validate_model
2024-04-04T19:20:57.003668888Z     values = validator(cls_, values)
2024-04-04T19:20:57.003677976Z   File "/usr/local/lib/python3.10/dist-packages/langchain_community/llms/vllm.py", line 88, in validate_environment
2024-04-04T19:20:57.003737816Z     values["client"] = VLLModel(
2024-04-04T19:20:57.003747015Z   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/llm.py", line 109, in __init__
2024-04-04T19:20:57.003822135Z     self.llm_engine = LLMEngine.from_engine_args(engine_args)
2024-04-04T19:20:57.003832476Z   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 371, in from_engine_args
2024-04-04T19:20:57.003961695Z     engine = cls(*engine_configs,
2024-04-04T19:20:57.003972015Z   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 109, in __init__
2024-04-04T19:20:57.004036705Z     self._init_tokenizer()
2024-04-04T19:20:57.004047206Z   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 175, in _init_tokenizer
2024-04-04T19:20:57.004127917Z     self.tokenizer: TokenizerGroup = TokenizerGroup(
2024-04-04T19:20:57.004138469Z   File "/usr/local/lib/python3.10/dist-packages/vllm/transformers_utils/tokenizer.py", line 100, in __init__
2024-04-04T19:20:57.004197427Z     self.tokenizer = get_tokenizer(self.tokenizer_id, **tokenizer_config)
2024-04-04T19:20:57.004206334Z   File "/usr/local/lib/python3.10/dist-packages/vllm/transformers_utils/tokenizer.py", line 30, in get_tokenizer
2024-04-04T19:20:57.004264691Z     tokenizer = AutoTokenizer.from_pretrained(
2024-04-04T19:20:57.004273489Z   File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/tokenization_auto.py", line 818, in from_pretrained
2024-04-04T19:20:57.004548730Z     tokenizer_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs)
2024-04-04T19:20:57.004557848Z   File "/usr/local/lib/python3.10/dist-packages/transformers/dynamic_module_utils.py", line 489, in get_class_from_dynamic_module
2024-04-04T19:20:57.004722468Z     final_module = get_cached_module_file(
2024-04-04T19:20:57.004731055Z   File "/usr/local/lib/python3.10/dist-packages/transformers/dynamic_module_utils.py", line 315, in get_cached_module_file
2024-04-04T19:20:57.004863671Z     modules_needed = check_imports(resolved_module_file)
2024-04-04T19:20:57.004872308Z   File "/usr/local/lib/python3.10/dist-packages/transformers/dynamic_module_utils.py", line 180, in check_imports
2024-04-04T19:20:57.004958450Z     raise ImportError(
2024-04-04T19:20:57.004967138Z ImportError: This modeling file requires the following packages that were not found in your environment: tiktoken. Run `pip install tiktoken`
ggbetz commented 3 months ago

Right, tiktoken was added as a requirement to vllm in 0.4.0; it wasn't one in 0.3.3.
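
For reference, the failure reproduces independently of vllm and cot-eval; a minimal check (hypothetical snippet, assuming an environment without tiktoken installed):

from transformers import AutoTokenizer

# Qwen's custom tokenizer code, fetched via trust_remote_code, imports
# tiktoken, so loading the tokenizer alone triggers the same ImportError.
tok = AutoTokenizer.from_pretrained("Qwen/Qwen-72B-Chat", trust_remote_code=True)
# ImportError: This modeling file requires ... tiktoken ...
# Workaround until the container is updated: pip install tiktoken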

I will update our Docker container to vllm==0.4.0, keeping the old container available under an appropriate tag.

I will also test and run the evals for the Qwen models.

ggbetz commented 3 months ago

Let's stick with the Qwen1.5 family ...