Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
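This first warning means the embedding rows for any newly added special tokens are randomly initialized until they are trained. A minimal sketch of the usual remedy, assuming a causal LM; the checkpoint name and the `<image>` token are illustrative placeholders, not taken from this log:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint name; substitute the model actually being loaded.
name = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Hypothetical special token, for illustration only.
tokenizer.add_special_tokens({"additional_special_tokens": ["<image>"]})

# Grow the embedding matrix so the new token has a row; that row is
# randomly initialized and only becomes meaningful after fine-tuning.
model.resize_token_embeddings(len(tokenizer))
```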
Loading checkpoint shards: 100%|████████████████████| 4/4 [00:06<00:00, 1.59s/it]
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
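The truncation warning fires when `truncation=True` is requested but neither the call nor the model config supplies a maximum length, so nothing is actually truncated. A sketch of the fix, assuming an input string `prompt`; the 2048-token budget is an assumption and should match the model's context window:

```python
inputs = tokenizer(
    prompt,            # assumed input text
    truncation=True,
    max_length=2048,   # assumed budget; set to your model's context length
    return_tensors="pt",
)
```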
/data/anaconda3/envs/lfr_CLIP/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:563: UserWarning: num_beams is set to 1. However, early_stopping is set to True -- this flag is only used in beam-based generation modes. You should set num_beams>1 or unset early_stopping.
warnings.warn(
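This UserWarning is harmless but noisy: `early_stopping` only affects beam search, so with `num_beams=1` it does nothing. As the message itself says, either set `num_beams>1` or drop the flag; parameter values below are illustrative:

```python
# Option 1: actually use beam search, where early_stopping applies.
outputs = model.generate(**inputs, num_beams=4, early_stopping=True,
                         max_new_tokens=128)

# Option 2: stay with greedy/sampling decoding and omit early_stopping.
outputs = model.generate(**inputs, max_new_tokens=128)
```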
Setting pad_token_id to eos_token_id:128001 for open-end generation.
Setting pad_token_id to eos_token_id:128001 for open-end generation.
Setting pad_token_id to eos_token_id:128001 for open-end generation.
Setting pad_token_id to eos_token_id:128001 for open-end generation.
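The repeated `pad_token_id` message appears once per `generate()` call because the model has no pad token configured, so Transformers falls back to the EOS id (128001 here). Passing it explicitly, or setting it once on the generation config, keeps the log quiet. A minimal sketch:

```python
# Per call:
outputs = model.generate(**inputs,
                         pad_token_id=tokenizer.eos_token_id,
                         max_new_tokens=128)

# Or once, on the model's generation config:
model.generation_config.pad_token_id = tokenizer.eos_token_id
```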