2024-09-13T17:32:31.647434Z WARN tokenizers::tokenizer::serialization: /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.19.1/src/tokenizer/serialization.rs:159: Warning: Token '<|reserved_special_token_249|>' was expected to have ID '128254' but was given ID 'None'
2024-09-13T17:32:31.647437Z WARN tokenizers::tokenizer::serialization: /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.19.1/src/tokenizer/serialization.rs:159: Warning: Token '<|reserved_special_token_250|>' was expected to have ID '128255' but was given ID 'None'
2024-09-13T17:32:31.649590Z INFO text_generation_router: router/src/main.rs:357: Using config Some(Llama)
2024-09-13T17:32:31.649606Z WARN text_generation_router: router/src/main.rs:384: Invalid hostname, defaulting to 0.0.0.0
2024-09-13T17:32:32.064496Z INFO text_generation_router::server: router/src/server.rs:1572: Warming up model
The pod consistently restarts right after the "Warming up model" log entry, so the same warnings and info messages repeat on every attempt. This suggests a problem either in the tokenizers library or in the model initialization/warmup step.
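One way to narrow this down is to check why Kubernetes terminated the previous container instance (for example, whether warmup was OOM-killed or the process exited with an error). Below is a minimal sketch using the official Python kubernetes client; the pod name and namespace are hypothetical placeholders, and it assumes a local kubeconfig with access to the cluster.

from kubernetes import client, config

POD_NAME = "tgi-0"      # hypothetical pod name
NAMESPACE = "default"   # hypothetical namespace

config.load_kube_config()  # uses local kubeconfig credentials
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(name=POD_NAME, namespace=NAMESPACE)
for cs in pod.status.container_statuses or []:
    last = cs.last_state.terminated
    if last is not None:
        # reason is e.g. "OOMKilled" or "Error"; exit_code 137 usually means
        # the container was killed, often due to memory pressure during warmup.
        print(f"{cs.name}: restarts={cs.restart_count} "
              f"reason={last.reason} exit_code={last.exit_code}")
    else:
        print(f"{cs.name}: restarts={cs.restart_count}, no recorded termination")

An "OOMKilled" reason here would point at memory pressure during the warmup step rather than at the log messages themselves.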