logikon-ai / cot-eval

A framework for evaluating the effectiveness of chain-of-thought reasoning in language models.
https://huggingface.co/spaces/logikon/open_cot_leaderboard
MIT License

Evaluate: openbmb/Eurus-70b-sft #37

Closed: ggbetz closed this issue 7 months ago

ggbetz commented 7 months ago

Check upon issue creation:

Parameters:

```
NEXT_MODEL_PATH=openbmb/Eurus-70b-sft
NEXT_MODEL_REVISION=main
NEXT_MODEL_PRECISION=float16
MAX_LENGTH=2048
GPU_MEMORY_UTILIZATION=0.8
VLLM_SWAP_SPACE=16
```
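As a rough sketch of how these environment variables could be translated into engine keyword arguments for the vLLM-backed LLM wrapper, one might use a helper like the following. The function name and the exact parameter mapping are assumptions for illustration; the actual wiring inside cot-eval may differ.

```python
import os

# Hypothetical helper: maps the job's NEXT_MODEL_* / vLLM environment
# variables onto keyword arguments of the kind a vLLM engine accepts.
# The parameter names mirror common vLLM options (dtype, swap_space,
# gpu_memory_utilization); the mapping itself is an assumption.
def engine_kwargs_from_env(env=os.environ):
    return {
        "model": env["NEXT_MODEL_PATH"],
        "revision": env.get("NEXT_MODEL_REVISION", "main"),
        "dtype": env.get("NEXT_MODEL_PRECISION", "auto"),
        "max_model_len": int(env.get("MAX_LENGTH", "2048")),
        "gpu_memory_utilization": float(env.get("GPU_MEMORY_UTILIZATION", "0.9")),
        "swap_space": int(env.get("VLLM_SWAP_SPACE", "4")),
    }

if __name__ == "__main__":
    example = {
        "NEXT_MODEL_PATH": "openbmb/Eurus-70b-sft",
        "NEXT_MODEL_REVISION": "main",
        "NEXT_MODEL_PRECISION": "float16",
        "MAX_LENGTH": "2048",
        "GPU_MEMORY_UTILIZATION": "0.8",
        "VLLM_SWAP_SPACE": "16",
    }
    print(engine_kwargs_from_env(example))
```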

ToDos:

ggbetz commented 7 months ago
```
2024-04-08 15:42:15,981 - root - INFO - Loading vLLM model openbmb/Eurus-70b-sft
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 716, in _get_config_dict
    config_dict = cls._dict_from_json_file(resolved_config_file)
  File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 815, in _dict_from_json_file
    return json.loads(text)
  File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.10/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 44 (char 45)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/cot-eval", line 8, in <module>
    sys.exit(main())
  File "/workspace/cot-eval/src/cot_eval/__main__.py", line 133, in main
    llm = VLLM(
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/load/serializable.py", line 120, in __init__
    super().__init__(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py", line 339, in __init__
    values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
  File "/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py", line 1100, in validate_model
    values = validator(cls_, values)
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/llms/vllm.py", line 88, in validate_environment
    values["client"] = VLLModel(
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/llm.py", line 109, in __init__
    self.llm_engine = LLMEngine.from_engine_args(engine_args)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 366, in from_engine_args
    engine_configs = engine_args.create_engine_configs()
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/arg_utils.py", line 289, in create_engine_configs
    model_config = ModelConfig(
  File "/usr/local/lib/python3.10/dist-packages/vllm/config.py", line 111, in __init__
    self.hf_config = get_config(self.model, trust_remote_code, revision,
  File "/usr/local/lib/python3.10/dist-packages/vllm/transformers_utils/config.py", line 22, in get_config
    config = AutoConfig.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 1138, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 631, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py", line 719, in _get_config_dict
    raise EnvironmentError(
OSError: It looks like the config file at '/scratch/slurm_tmpdir/job_1421666/huggingface/hub/models--openbmb--Eurus-70b-sft/snapshots/7d3da2e398fef5db6ef9c66ee7b7970638d6b38b/config.json' is not a valid JSON file.
```
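The root-cause error ("Expecting property name enclosed in double quotes") is the message `json.loads` raises for, among other things, a trailing comma or single-quoted keys in the config file. A minimal reproduction (the sample JSON content is illustrative, not the model's actual `config.json`):

```python
import json

# A trailing comma after the last entry makes json.loads fail with the
# same "Expecting property name enclosed in double quotes" message seen
# in the traceback above.
broken = '{\n  "architectures": ["LlamaForCausalLM"],\n}'
try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print(e.msg)
```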
ggbetz commented 7 months ago

The issue seems to have been fixed upstream: https://huggingface.co/openbmb/Eurus-70b-sft/commit/cd41e82e6e2df512ebf9d49da8dfac693fde8d83 Will try again...