meta-llama / llama

Inference code for Llama models

OSError: Can't load the configuration of '../downloaded_plms\gpt2\base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '../downloaded_plms\gpt2\base' is the correct path to a directory containing a config.json file #1192

Open · sanzi116 opened this issue 3 weeks ago

sanzi116 commented 3 weeks ago

Before submitting a bug, please make sure the issue hasn't already been addressed by searching through the FAQs and existing/past issues.

Describe the bug

Loading a locally saved GPT-2 model from '../downloaded_plms\gpt2\base' fails with an OSError: transformers cannot load the model configuration because it does not find a config.json file at that path. The path in the error message mixes forward slashes and Windows-style backslashes, so the report appears to come from a Windows environment.
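
For context (this is not part of the original report), the config.json the error asks for is normally written by save_pretrained when a model is first downloaded and saved locally. A minimal sketch with Hugging Face transformers, assuming GPT-2 is saved under the directory named in the error message:

```python
# Hedged sketch, not taken from the report: how a local GPT-2 directory that
# from_pretrained can load is typically created. save_pretrained writes
# config.json next to the weights, which is the file the OSError says is missing.
from transformers import GPT2Model, GPT2Tokenizer

model = GPT2Model.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

save_dir = "../downloaded_plms/gpt2/base"  # directory taken from the error message
model.save_pretrained(save_dir)      # writes config.json and the model weights
tokenizer.save_pretrained(save_dir)  # writes vocab/merges/tokenizer config
```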

Minimal reproducible example

The issue does not include the reporter's actual loading code; see the sketch below for the kind of call that typically produces this error.
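
A minimal sketch, assuming the local GPT-2 copy is loaded with Hugging Face transformers and the path is built with os.path.join; the script, class names, and path construction are assumptions, not taken from the issue:

```python
# Minimal sketch of the failing call (assumed, not copied from the report).
import os

from transformers import AutoConfig, AutoModel

# On Windows, os.path.join produces the mixed-separator path seen in the
# error message: '../downloaded_plms\gpt2\base'
model_path = os.path.join("../downloaded_plms", "gpt2", "base")

# Raises OSError if the directory does not contain a config.json file.
config = AutoConfig.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path)
```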

Output

The report does not include a stack trace or any output beyond the error message quoted in the issue title.
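
That error message, reproduced here as the output block the template asks for:

```
OSError: Can't load the configuration of '../downloaded_plms\gpt2\base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '../downloaded_plms\gpt2\base' is the correct path to a directory containing a config.json file
```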

Runtime Environment

Not specified in the report.

Additional context

No additional context about the problem or environment was provided.