HKUDS / LightRAG

"LightRAG: Simple and Fast Retrieval-Augmented Generation"
https://arxiv.org/abs/2410.05779
MIT License

hf_demo.py question? #172

Closed: lfreee closed this issue 2 weeks ago

lfreee commented 3 weeks ago

Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|████████████████████| 4/4 [00:06<00:00, 1.59s/it]
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
/data/anaconda3/envs/lfr_CLIP/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:563: UserWarning: num_beams is set to 1. However, early_stopping is set to True -- this flag is only used in beam-based generation modes. You should set num_beams>1 or unset early_stopping.
  warnings.warn(
Setting pad_token_id to eos_token_id:128001 for open-end generation.
Setting pad_token_id to eos_token_id:128001 for open-end generation.
Setting pad_token_id to eos_token_id:128001 for open-end generation.
Setting pad_token_id to eos_token_id:128001 for open-end generation.

LarFii commented 3 weeks ago

This should not affect normal operation.
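For anyone who wants to silence these warnings anyway, here is a minimal sketch of the underlying transformers calls. It is not the actual code path in hf_demo.py; the model name and max_length value are placeholders, and the point is only to show where each warning comes from and which parameter quiets it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; substitute whatever hf_demo.py is configured to load.
model_name = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Passing an explicit max_length alongside truncation=True removes the
# "Asking to truncate to max_length but no maximum length is provided" warning
# (4096 is an illustrative value; use your model's context length).
inputs = tokenizer(
    "What is LightRAG?", return_tensors="pt", truncation=True, max_length=4096
).to(model.device)

# Setting pad_token_id explicitly stops the repeated
# "Setting pad_token_id to eos_token_id:128001" messages; leaving early_stopping
# unset (or using num_beams > 1) avoids the beam-search UserWarning.
output = model.generate(
    **inputs,
    max_new_tokens=64,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

As LarFii notes, none of this is required for correct output; it only tidies up the logs.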