usail-hkust / LLMTSCS

Official code for article "LLMLight: Large Language Models as Traffic Signal Control Agents".

ValueError: Unrecognized keyword arguments passed to Embedding: {'input_length': 8} #6

Closed: DA21S321D closed this issue 6 months ago

DA21S321D commented 6 months ago

Here is the full log:

(base) gq@gq-Inspiron-7590:~/LLMTSCS$ python run_advanced_mplight.py --dataset hangzhou --traffic_file anon_4_4_hangzhou_real.json --proj_name TSCS
2024-03-26 17:55:31.793626: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-26 17:55:32.423335: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/home/gq/conda/lib/python3.10/site-packages/transformers/utils/generic.py:441: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  _torch_pytree._register_pytree_node(
/home/gq/conda/lib/python3.10/site-packages/torch/cuda/__init__.py:628: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
/home/gq/conda/lib/python3.10/site-packages/transformers/utils/generic.py:309: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  _torch_pytree._register_pytree_node(
num_intersections: 16
anon_4_4_hangzhou_real.json
0
start_traffic
after_traffic
traffic to join 0
wandb: Currently logged in as: seelowst. Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.16.5
wandb: Run data is saved locally in /home/gq/LLMTSCS/wandb/run-20240326_175537-sdqo31jn
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run round_0
wandb: ⭐️ View project at https://wandb.ai/seelowst/TSCS
wandb: 🚀 View run at https://wandb.ai/seelowst/TSCS/runs/sdqo31jn/workspace
round 0 starts
==============  generator =============
Process Process-1:
Traceback (most recent call last):
  File "/home/gq/conda/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/gq/conda/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/gq/LLMTSCS/utils/utils.py", line 33, in pipeline_wrapper
    round_results = ppl.run(round=i, multi_process=False)
  File "/home/gq/LLMTSCS/utils/pipeline.py", line 158, in run
    generator_wrapper(cnt_round=cnt_round,
  File "/home/gq/LLMTSCS/utils/pipeline.py", line 72, in generator_wrapper
    generator = Generator(cnt_round=cnt_round,
  File "/home/gq/LLMTSCS/utils/generator.py", line 30, in __init__
    agent = DIC_AGENTS[agent_name](
  File "/home/gq/LLMTSCS/models/network_agent.py", line 35, in __init__
    self.q_network = self.build_network()
  File "/home/gq/LLMTSCS/models/advanced_mplight_agent.py", line 28, in build_network
    _p = Activation('sigmoid')(Embedding(2, 4, input_length=8)(dic_input_node["feat1"]))
  File "/home/gq/conda/lib/python3.10/site-packages/keras/src/layers/core/embedding.py", line 81, in __init__
    super().__init__(**kwargs)
  File "/home/gq/conda/lib/python3.10/site-packages/keras/src/layers/layer.py", line 264, in __init__
    raise ValueError(
ValueError: Unrecognized keyword arguments passed to Embedding: {'input_length': 8}
traffic finish join 0
Gungnir2099 commented 6 months ago

It might be a system error; you may need to resolve the `UserWarning: Can't initialize NVML` warning first.

DA21S321D commented 6 months ago

Thanks for your kind patience. The solution is to change `input_length=8` into `input_shape=(8,)`; the error is caused by a version mismatch. Also, the latest peft version (0.10.0) has deprecated `prepare_model_for_int8_training`, as reported in "prepare_model_for_int8_training deprecated". Since version mismatches cause a lot of confusion, could you please make the requirements more detailed?
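
For reference, here is a minimal sketch of a Keras-3-compatible version of the failing line in `models/advanced_mplight_agent.py`. It is only an illustration under the assumption that Keras 3 is installed (where `Embedding` no longer accepts `input_length`); the `feat1` Input below stands in for `dic_input_node["feat1"]`, and simply dropping the argument is another option since the sequence length is inferred from the input tensor:

```python
# Minimal sketch, assuming Keras 3: Embedding no longer accepts `input_length`.
# `feat1` is a stand-in for dic_input_node["feat1"] in advanced_mplight_agent.py.
from keras.layers import Activation, Embedding, Input

feat1 = Input(shape=(8,), name="feat1")

# Keras 2 form: Embedding(2, 4, input_length=8)(feat1)
# Keras 3 form: drop input_length; the output length is inferred from the input.
_p = Activation("sigmoid")(Embedding(2, 4)(feat1))
```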

Gungnir2099 commented 6 months ago

Thank you for your feedback. I have added the peft version requirement to the README.
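
As a side note, here is a quick sketch for checking which versions are actually installed locally, to compare against the pins in the README; the package list below is just the set touched by this issue:

```python
# Print the locally installed versions of the packages involved in this issue.
import importlib.metadata as md

for pkg in ("tensorflow", "keras", "transformers", "peft"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```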

BenjaminBossan commented 5 months ago

> Also, the latest peft version (0.10.0) has deprecated `prepare_model_for_int8_training`, as reported in "prepare_model_for_int8_training deprecated". Since version mismatches cause a lot of confusion, could you please make the requirements more detailed?

Note that you can simply use `prepare_model_for_kbit_training` instead of `prepare_model_for_int8_training`. If this is the only issue with PEFT, I would recommend replacing the function rather than pinning an old PEFT version.
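
A minimal sketch of that swap, assuming peft >= 0.10 (the wrapper function name below is only illustrative; `model` is whatever quantized transformers model the training script loads):

```python
# Minimal sketch, assuming peft >= 0.10, where prepare_model_for_int8_training
# was removed and prepare_model_for_kbit_training is the drop-in replacement.
from peft import prepare_model_for_kbit_training

def prepare_quantized_model(model):
    # Old peft: prepare_model_for_int8_training(model)
    # New peft: the k-bit helper also covers the 8-bit case.
    return prepare_model_for_kbit_training(model)
```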