eole-nlp / eole

Open language modeling toolkit based on PyTorch
https://eole-nlp.github.io/eole
MIT License

pydantic error due to additional inputs during fine-tuning attempt #3

Closed l-k-11235 closed 3 months ago

l-k-11235 commented 3 months ago

I got a pydantic error when trying to launch a fine-tuning run with this command:

eole train -config my_config.yaml

The error starts with:

Traceback (most recent call last):
  File "/usr/local/bin/eole", line 33, in <module>
    sys.exit(load_entry_point('EOLE', 'console_scripts', 'eole')())
  File "workdir/eole/eole/bin/main.py", line 39, in main
    bin_cls.run(args)
  File "/workdir/eole/eole/bin/run/train.py", line 67, in run
    config = cls.build_config(args)
  File "/workdir/eole/eole/bin/run/__init__.py", line 42, in build_config
    config = cls.config_class(**config_dict)
  File "/usr/local/lib/python3.10/dist-packages/pydantic/main.py", line 176, in __init__
    self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 51 validation errors for TrainConfig
src_subword_model

  Extra inputs are not permitted [type=extra_forbidden, input_value='//models/llama3-8b/bpe.model', input_type=str]
    For further information visit https://errors.pydantic.dev/2.7/v/extra_forbidden
francoishernandez commented 3 months ago

Transform-specific settings like this now live in the transforms_configs section. See the examples in the provided recipes, e.g.: https://github.com/eole-nlp/eole/blob/6d1e1be43a399a053db7b0ef7a6da58065091b78/recipes/llama3/llama-inference.yaml#L3-L9 https://github.com/eole-nlp/eole/blob/6d1e1be43a399a053db7b0ef7a6da58065091b78/recipes/wiki_103/wiki_103.yaml#L20-L29
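Concretely, a top-level key such as src_subword_model would move under the relevant transform's name inside transforms_configs. A minimal sketch of the change, where the transform name and the list of other keys are illustrative (check the linked recipes for the exact layout your transform expects):

```yaml
# Before (rejected by pydantic with "Extra inputs are not permitted"):
# src_subword_model: //models/llama3-8b/bpe.model

# After: nest transform-specific settings under transforms_configs.
# "sentencepiece" here is an assumed transform name; use the one from
# your own transforms list.
transforms: [sentencepiece]
transforms_configs:
  sentencepiece:
    src_subword_model: "//models/llama3-8b/bpe.model"
```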