InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] When serving internvl-v1-5 with lmdeploy serve, the output always runs to the maximum length #1958

Closed sunzx8 closed 2 months ago

sunzx8 commented 3 months ago


Describe the bug

1. The session length behaves inconsistently: generating with lmdeploy at a length of 4096, an OCR task that the original demo finishes within a 2048-token length now runs out of budget with only about a third of the output produced.

2. Using the command lmdeploy serve api_server /home/ubuntu/shawn/hf_ms_model/InternVL-Chat-V1-5 --server-port 6688 --tp 8 --session-len 8192 to start an internvl-V1-5 service with a length of 4096/8192, every request returns exactly 4096/8192 tokens no matter what the question is.

Reproduction

lmdeploy serve api_server /home/ubuntu/shawn/hf_ms_model/InternVL-Chat-V1-5 --server-port 6688 --tp 8 --session-len 8192

Request code:

from openai import OpenAI
import json
import os
from mimetypes import guess_type
import base64
import multiprocessing
import time

def local_image_to_data_url(image_path):
    # Guess the MIME type of the image based on the file extension
    mime_type, _ = guess_type(image_path)
    if mime_type is None:
        mime_type = 'application/octet-stream'  # Default MIME type if none is found

    # Read and encode the image file
    with open(image_path, "rb") as image_file:
        base64_encoded_data = base64.b64encode(image_file.read()).decode('utf-8')

    # Construct the data URL
    return f"data:{mime_type};base64,{base64_encoded_data}"

image_path = '/home/ubuntu/shawn/lmm_data/ocr-type/1.jpg'
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://172.16.16.13:6688/v1')
model_name = "/home/ubuntu/shawn/hf_ms_model/InternVL-Chat-V1-5"
data_url = local_image_to_data_url(image_path)
start = time.time()
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [{
            'type': 'text',
            # "Output the image content as markdown; if there are tables, output them as HTML."
            'text': '输出图片内容的markdown格式,如果有表格,则输出为html格式',
        }, {
            'type': 'image_url',
            'image_url': {
                'url': data_url,
            },
        }],
    }],
    temperature=0.8,
    top_p=0.8)
print(response)
print('time', time.time() - start)

The test image: [image attachment]

Environment

accelerate                0.32.1
addict                    2.4.0
aiofiles                  23.2.1
aliyun-python-sdk-core    2.15.1
aliyun-python-sdk-kms     2.16.3
altair                    5.3.0
annotated-types           0.7.0
anyio                     4.4.0
asttokens                 2.4.1
attrdict                  2.0.1
attrs                     23.2.0
Babel                     2.15.0
bce-python-sdk            0.9.17
beautifulsoup4            4.12.3
blinker                   1.8.2
boto3                     1.34.135
botocore                  1.34.135
bottle                    0.12.25
cachetools                5.3.3
certifi                   2024.6.2
cffi                      1.16.0
charset-normalizer        3.3.2
click                     8.1.7
cmake                     3.29.6
colorama                  0.4.6
coloredlogs               15.0.1
comm                      0.2.2
common                    0.1.2
contourpy                 1.2.1
crcmod                    1.7
cryptography              42.0.8
cssselect                 1.2.0
cssutils                  2.11.1
cycler                    0.12.1
Cython                    3.0.10
data                      0.4
debugpy                   1.6.7
decorator                 5.1.1
deepspeed                 0.13.5
defusedxml                0.7.1
dnspython                 2.6.1
dual                      0.0.10
dynamo3                   0.4.10
einops                    0.8.0
email_validator           2.2.0
et-xmlfile                1.1.0
exceptiongroup            1.2.0
executing                 2.0.1
fastapi                   0.111.0
fastapi-cli               0.0.4
ffmpy                     0.3.2
filelock                  3.14.0
fire                      0.6.0
flash-attn                2.3.6
Flask                     3.0.3
flask-babel               4.0.0
flatbuffers               24.3.25
flywheel                  0.5.4
fonttools                 4.53.0
fsspec                    2024.6.0
funcsigs                  1.0.2
future                    1.0.0
gradio                    3.47.1
gradio_client             0.6.0
h11                       0.14.0
hjson                     3.1.0
httpcore                  1.0.5
httptools                 0.6.1
httpx                     0.27.0
huggingface-hub           0.23.2
humanfriendly             10.0
icecream                  2.1.3
idna                      3.7
imageio                   2.34.2
imgaug                    0.4.0
importlib_metadata        7.2.1
importlib_resources       6.4.0
ipykernel                 6.29.4
ipython                   8.18.1
itsdangerous              2.2.0
jedi                      0.19.1
Jinja2                    3.1.4
jmespath                  0.10.0
joblib                    1.4.2
jsonschema                4.22.0
jsonschema-specifications 2023.12.1
jupyter_client            8.6.2
jupyter_core              5.7.2
kiwisolver                1.4.5
lazy_loader               0.4
lit                       18.1.7
lmdb                      1.4.1
lmdeploy                  0.5.0
lxml                      5.2.2
Markdown                  3.6
markdown-it-py            3.0.0
MarkupSafe                2.1.5
matplotlib                3.9.0
matplotlib-inline         0.1.7
mdurl                     0.1.2
mmengine-lite             0.10.4
model-index               0.1.11
more-itertools            10.3.0
mpmath                    1.3.0
nest_asyncio              1.6.0
networkx                  3.2.1
ninja                     1.11.1.1
numpy                     1.26.4
nvidia-cublas-cu11        11.10.3.66
nvidia-cublas-cu12        12.1.3.1
nvidia-cuda-cupti-cu11    11.7.101
nvidia-cuda-cupti-cu12    12.1.105
nvidia-cuda-nvrtc-cu11    11.7.99
nvidia-cuda-nvrtc-cu12    12.1.105
nvidia-cuda-runtime-cu11  11.7.99
nvidia-cuda-runtime-cu12  12.1.105
nvidia-cudnn-cu11         8.5.0.96
nvidia-cudnn-cu12         8.9.2.26
nvidia-cufft-cu11         10.9.0.58
nvidia-cufft-cu12         11.0.2.54
nvidia-curand-cu11        10.2.10.91
nvidia-curand-cu12        10.3.2.106
nvidia-cusolver-cu11      11.4.0.1
nvidia-cusolver-cu12      11.4.5.107
nvidia-cusparse-cu11      11.7.4.91
nvidia-cusparse-cu12      12.1.0.106
nvidia-nccl-cu11          2.14.3
nvidia-nccl-cu12          2.19.3
nvidia-nvjitlink-cu12     12.5.82
nvidia-nvtx-cu11          11.7.91
nvidia-nvtx-cu12          12.1.105
onnx                      1.14.0
onnxruntime               1.15.1
onnxruntime-gpu           1.18.0
onnxsim                   0.4.36
opencv-contrib-python     4.6.0.66
opencv-python             4.6.0.66
opencv-python-headless    4.10.0.84
opendatalab               0.0.10
openmim                   0.3.9
openpyxl                  3.1.4
openxlab                  0.1.0
ordered-set               4.1.0
orjson                    3.10.5
oss2                      2.17.0
packaging                 24.1
paddle                    1.0.2
paddleocr                 2.7.3
pandas                    2.2.2
parso                     0.8.4
pdf2docx                  0.5.8
pdf2img                   0.1.2
peewee                    3.17.5
peft                      0.11.1
pexpect                   4.9.0
pickleshare               0.7.5
pillow                    10.3.0
pip                       24.0
platformdirs              4.2.2
premailer                 3.10.0
prompt_toolkit            3.0.47
protobuf                  5.27.1
prox                      0.0.17
psutil                    5.9.8
ptyprocess                0.7.0
pure-eval                 0.2.2
py-cpuinfo                9.0.0
pyclipper                 1.3.0.post5
pycocoevalcap             1.2
pycocotools               2.0.7
pycparser                 2.22
pycryptodome              3.20.0
pydantic                  2.7.4
pydantic_core             2.18.4
pydub                     0.25.1
Pygments                  2.18.0
PyMuPDF                   1.24.7
PyMuPDFb                  1.24.6
pynvml                    11.5.0
pyparsing                 3.1.2
PySocks                   1.7.1
pytesseract               0.3.10
python-dateutil           2.9.0
python-docx               1.1.2
python-dotenv             1.0.1
python-geoip-python3      1.3
python-multipart          0.0.9
pytz                      2023.4
PyYAML                    6.0.1
pyzmq                     25.1.2
rapid-table               0.1.3
rapidfuzz                 3.9.3
rapidocr-onnxruntime      1.3.22
rarfile                   4.2
referencing               0.35.1
regex                     2024.5.15
requests                  2.32.3
rich                      13.4.2
rpds-py                   0.18.1
ruff                      0.4.10
s3transfer                0.10.2
safetensors               0.4.3
scikit-image              0.24.0
scikit-learn              1.5.0
scipy                     1.13.0
seaborn                   0.13.2
semantic-version          2.10.0
sentencepiece             0.2.0
setuptools                60.2.0
shapely                   2.0.4
shellingham               1.5.4
shortuuid                 1.0.13
six                       1.16.0
sniffio                   1.3.1
soupsieve                 2.5
stack-data                0.6.2
starlette                 0.37.2
supervision               0.21.0
sympy                     1.12.1
tabulate                  0.9.0
termcolor                 2.4.0
thop                      0.1.1.post2209072238
threadpoolctl             3.5.0
tifffile                  2024.6.18
tight                     0.1.0
tiktoken                  0.7.0
timm                      0.9.12
tokenizers                0.13.3
tomli                     2.0.1
tomlkit                   0.12.0
toolz                     0.12.1
torch                     2.2.2
torchaudio                2.0.2
torchvision               0.17.2
tornado                   6.4.1
tqdm                      4.65.2
traitlets                 5.14.3
transformers              4.33.0
triton                    2.2.0
typer                     0.12.3
typing_extensions         4.12.2
tzdata                    2024.1
ujson                     5.10.0
ultralytics               8.1.34               /home/ubuntu/shawn/table_recognition/yolov10
urllib3                   1.26.19
uvicorn                   0.30.1
uvloop                    0.19.0
visualdl                  2.5.3
watchfiles                0.22.0
wcwidth                   0.2.13
websockets                11.0.3
Werkzeug                  3.0.3
wheel                     0.43.0
yacs                      0.1.8
yapf                      0.40.2
zipp                      3.19.2

Error traceback

The original demo is correct.
This is the result generated by lmdeploy:
ChatCompletion(id='1', choices=[Choice(finish_reason='length', index=0, logprobs=None, message=ChatCompletionMessage(content='This gap is mainly reflected in the following three as-aspects: (1) Parameter Scale: Recent proprietary commercial-MLLMs [7, 37, 83, 88] typically scales not less than 100 billion parameters, while open-source models com- monly employ a 300 million parameter vision foundation model (VFM), which is integrated with either a 7 billion or 13 billion LLMs. (2) Image Resolution: Proprietary commercial models typically employ a dynamic resolution approach, preserving the original aspect ratio to facilitate detailed scene and document understanding. In contrast, open-source models generally train with fixed resolutions [20, 25, 58, 67, 112, 137], such as 336×336 and 448×448, leading to a considerable gap in capabilities relative to commercial counterparts. (3) Multilingual Capability: Proprietary etary models often leverage extensive multilingual datasets for training, enhancing their performance across diverse languages. However, open-source models predominantly utilize English data, relying on the zero-shot capabilities of LLMs for other languages, e.g., LaVa-NeXT [60]. This re- sults in sub-optimal performance in non-English scene understanding and OCR tasks.\n\nTo bridge the gap, we introduce InternVL 1.5, integrate- ing three major improvements to enhance its performance and usability. (1) We implement a continuous learning ap- proach to a large-scale VFM—InternVIT-6B [20], refining it using high-quality image-text data. This process not only enhances the model\'s ability to understand visual content but also improves its adaptability across various LLMs. In addition, using InternLM2-0B [13] as the language founda- tion model also offers robust initial language processing capabilities. (2) We adopt a dynamic high-resolution strat- egy that segments images into 448×448 tiles, with the num- ber of tiles ranging from 1 to 40 (i.e., 4K resolution) based on the aspect ratio and resolution of the images. To capture global context, we additionally include a thumbnail view. (3) We gather a diverse collection of public datasets, cov- ering high-quality natural scenes, charts, documents, and conversations in both English and Chinese. Additionally, we develop a data translation pipeline using open-source LLMs, which can be easily extended to more languages.\n\nThese designs endow our model with several advanced tages: (1) Flexible Resolution: Similar to the "low" or "high" modes available in GPT-4V [83], InternVL 1.5 enables users to select the optimal resolution for their im- ages, such as using low-resolution for scene subject de- scription and high-resolution (up to 4K resolution) for doc- ument understanding, effectively balancing computational efficiency with detail preservation. (2) Bilingual Profi- ciency: InternVL 1.5 exhibits robust bilingual capabili- ties, proficiently handling multimodal perception and un- derstanding tasks in both English and Chinese. Notably, in tasks related to Chinese, our model generally outperforms the leading commercial model GPT-4V [83]. (3) Strong Vi- sual Representation: By implementing a continuous learn- ing strategy, we enhance the visual representation capabilities of InternVIT-6B [20], making it robust to flexible in- put resolution and various visual domains. Benefiting from InternVIT-6B\'s massive parameters, our model achieves a level of visual representation that rivals the linguistic capabilities of LLMs with more than 20 billion parameters. This synergy between visual and linguistic processing endows our system with robust multimodal capabilities.\n\nWe evaluated InternVL 1.5 on 18 representative multi- modal benchmarks, which are categorized into four specific groups: OCR-related, general multimodal, mathematical, and multi-turn conversation benchmarks. Compared to both open-source and proprietary models, InternVL 1.5 shows competitive performance, achieving state-of-the-art results in 8 of 18 benchmarks. Notably, as shown in Figure 1, it even surpasses leading proprietary models like Grok- 1.5 [120], GPT-4V [83], Claude-3 Opus [5], and Gem- ini Pro 1.5 [88] in four specific benchmarks, particularly in OCR-related datasets such as TextVQA [96], ChartVQA [77], and DocVQA [78]. This evaluation indicates that InternVL 1.5 has effectively narrowed the gap between open-source models and leading commercial models. We hope that our approach and open-source model weights can contribute to the development of the multimodal community.\n\n<|im_end|>[UNUSED_TOKEN_145]\n<|im_start|>\n![Figure 2. Characteristics of InternVL 1.5. InternVL 1.5 features strong visual representation through continuous learning, flexible resolution capabilities, and robust bilingual proficiency in English and Chinese, positioning it as a competitive MLLM.](https://i.imgur.com/2y5V9s.png)\n<|im_end|>[UNUSED_TOKEN_145]\n<|im_start|>\n[the same transcription and figure block then repeat verbatim three more times, until generation is cut off mid-sentence at] (2) We adopt', role='assistant', function_call=None, tool_calls=None))], created=1720491691, model='/home/ubuntu/shawn/hf_ms_model/InternVL-Chat-V1-5', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=4794, prompt_tokens=3398, total_tokens=8192))
irexyc commented 2 months ago

"The original demo is correct"

Are you using the code from here? https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5

In that demo, sampling is disabled. For the OpenAI-compatible API, that corresponds to setting temperature to 0. Here is the result I got: https://gist.github.com/irexyc/80593e7503df8da863f091f1b05b3e03

generation_config = dict(
    num_beams=1,
    max_new_tokens=1024,
    do_sample=False,
)
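
A minimal sketch of the equivalent OpenAI-style request, reusing client, model_name, and data_url from the reproduction script above; temperature=0 stands in for do_sample=False, and max_tokens=1024 mirrors max_new_tokens (whether the server honors both exactly is an assumption):

# Reuses client / model_name / data_url from the reproduction script.
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text',
             'text': '输出图片内容的markdown格式,如果有表格,则输出为html格式'},
            {'type': 'image_url', 'image_url': {'url': data_url}},
        ],
    }],
    temperature=0,    # greedy decoding, matching do_sample=False in the demo
    max_tokens=1024)  # matching max_new_tokens=1024
print(response.choices[0].finish_reason)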

"After starting an internvl-V1-5 service with a length of 4096/8192, every request returns exactly 4096/8192 tokens no matter what the question is"

Does it also fail to stop on plain-text questions? Could you start the server with --log-level INFO and paste a complete server-side log for one request?
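
For reference, that is the reproduction command with logging enabled:

lmdeploy serve api_server /home/ubuntu/shawn/hf_ms_model/InternVL-Chat-V1-5 --server-port 6688 --tp 8 --session-len 8192 --log-level INFO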

sunzx8 commented 2 months ago

First, here is the output at 4096 when an image is included:

ChatCompletion(id='1', choices=[Choice(finish_reason='length', index=0, logprobs=None, message=ChatCompletionMessage(content='This gap is mainly reflected in the following three as-aspects: (1) Parameter Scale: [a single pass of essentially the same paper transcription as above, with minor OCR-level differences, ending at] ... (3) Strong Vi- sual Representation: By implementing a continuous learn- ing strategy, we enhance the visual representation capabilities of', role='assistant', function_call=None, tool_calls=None))], created=1720527233, model='/home/ubuntu/shawn/hf_ms_model/InternVL-Chat-V1-5', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=698, prompt_tokens=3398, total_tokens=4096))

As you can see, the total still shows exactly 4096, but the number of tokens actually generated is nowhere near 4096. Here is the log with --log-level INFO enabled: lmdeploy.log
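
These numbers come straight off the response object; a minimal sketch for reading them, assuming the same openai client as in the reproduction script:

choice = response.choices[0]
print(choice.finish_reason)              # 'length': generation stopped at the token budget
print(response.usage.prompt_tokens)      # 3398 in the run above
print(response.usage.completion_tokens)  # 698 in the run above
print(response.usage.total_tokens)       # 4096, exactly the configured limit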

sunzx8 commented 2 months ago

Here is the reply to a plain-text question. The prompt was "你是谁?" ("Who are you?"). Reply:

ChatCompletion(id='2', choices=[Choice(finish_reason='length', index=0, logprobs=None, message=ChatCompletionMessage(content='我是一个人工智能助手,旨在帮助您解决问题和提供信息。<|im_end|>[UNUSED_TOKEN_145]\n我是一个人工智能助手,旨在帮助您解决问题和提供信息。[UNUSED_TOKEN_145][UNUSED_TOKEN_145]\n你有什么能力?[UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][... [UNUSED_TOKEN_145] repeats for the rest of the response; the record is cut off here ...]

(The model answers "I am an AI assistant, designed to help you solve problems and provide information.", emits <|im_end|>, repeats the answer, asks itself "你有什么能力?" ("What abilities do you have?"), and then fills the remaining budget with [UNUSED_TOKEN_145].)
[UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_T
OKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145]
[UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_T
OKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145]
[UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_T
OKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145]
[UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_T
OKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145]
[UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145][UNUSED_TOKEN_145]\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', role='assistant', function_call=None, tool_calls=None))], created=1720527441, model='/home/ubuntu/shawn/hf_ms_model/InternVL-Chat-V1-5', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=4038, prompt_tokens=58, total_tokens=4096)) time 45.864256620407104

Here is the request code:

from openai import OpenAI
import json
import os
from mimetypes import guess_type
import base64
import multiprocessing
import time

def local_image_to_data_url(image_path):
    # Guess the MIME type of the image based on the file extension
    mime_type, _ = guess_type(image_path)
    if mime_type is None:
        mime_type = 'application/octet-stream'  # Default MIME type if none is found

    # Read and encode the image file
    with open(image_path, "rb") as image_file:
        base64_encoded_data = base64.b64encode(image_file.read()).decode('utf-8')

    # Construct the data URL
    return f"data:{mime_type};base64,{base64_encoded_data}"

image_path = '/home/ubuntu/shawn/lmm_data/ocr-type/1.jpg'
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:6688/v1')
model_name = "/home/ubuntu/shawn/hf_ms_model/InternVL-Chat-V1-5"
data_url = local_image_to_data_url(image_path)
start = time.time()
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {
                'type': 'text',
                'text': '输出图片内容的markdown格式,如果有表格,则输出为html格式',
            },
            {
                'type': 'image_url',
                'image_url': {'url': data_url},
            },
        ],
    }],
    temperature=0,
    top_p=0.8)
print(response)
print('time', time.time() - start)

Text-only request code:

from openai import OpenAI
import json
import os
from mimetypes import guess_type
import base64
import multiprocessing
import time

def local_image_to_data_url(image_path):
    # Guess the MIME type of the image based on the file extension
    mime_type, _ = guess_type(image_path)
    if mime_type is None:
        mime_type = 'application/octet-stream'  # Default MIME type if none is found

    # Read and encode the image file
    with open(image_path, "rb") as image_file:
        base64_encoded_data = base64.b64encode(image_file.read()).decode('utf-8')

    # Construct the data URL
    return f"data:{mime_type};base64,{base64_encoded_data}"

image_path = '/home/ubuntu/shawn/lmm_data/ocr-type/1.jpg'
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:6688/v1')
model_name = "/home/ubuntu/shawn/hf_ms_model/InternVL-Chat-V1-5"
data_url = local_image_to_data_url(image_path)  # not actually used in this text-only request
start = time.time()
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {
                'type': 'text',
                'text': '你是谁?',
            },
        ],
    }],
    temperature=0,
    top_p=0.8)
print(response)
print('time', time.time() - start)

sunzx8 commented 2 months ago

I suspect it is a template problem? It looks like it is caused by a missing stop word.

irexyc commented 2 months ago

If finish_reason is length, it means generation stopped because it hit the requested maximum length; if it is stop, it stopped naturally.
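For a quick check, here is a minimal sketch against the OpenAI-style response object returned by the request code above:

# Which stop condition ended generation?
print(response.choices[0].finish_reason)  # 'length' = hit the max generation length; 'stop' = stopped naturally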

<|im_end|> and [UNUSED_TOKEN_145] indeed should not appear, but with your command I would expect the correct stop id to be matched.

Could you run it like this and paste the log? My log looks like this: https://gist.github.com/irexyc/65a6c3c52355b84d1127a2c574e69ee5

from lmdeploy import pipeline
pipe = pipeline('/nvme/shared/InternVL-Chat-V1-5/', log_level='INFO')
out = pipe('你是谁?')
print(out)
sunzx8 commented 2 months ago

> If finish_reason is length, it means generation stopped because it hit the requested maximum length; if it is stop, it stopped naturally.
>
> <|im_end|> and [UNUSED_TOKEN_145] indeed should not appear, but with your command I would expect the correct stop id to be matched.
>
> Could you run it like this and paste the log? My log looks like this: https://gist.github.com/irexyc/65a6c3c52355b84d1127a2c574e69ee5
>
> from lmdeploy import pipeline
> pipe = pipeline('/nvme/shared/InternVL-Chat-V1-5/', log_level='INFO')
> out = pipe('你是谁?')
> print(out)

I already posted it above; it's this:

(screenshot of the log)
irexyc commented 2 months ago

Sorry, I missed that earlier.


lmdeploy currently cannot handle stop_words given as strings; it can only convert stop words into stop ids, and each stop word must map to a single integer id.

Judging from your log, these two lines are the problem. According to lmdeploy's built-in chat template, the stop_words for InternVL-Chat-V1-5 are <|im_end|> and <|action_end|>. lmdeploy looks up their corresponding ids, but your tokenizer encodes them to sequences longer than 1.

2024-07-09 20:12:55,240 - lmdeploy - WARNING - The token <|im_end|>, its length of indexes [333, 352, 449, 6368, 352, 330] is over than 1. Currently, it can not be used as stop words
2024-07-09 20:12:55,240 - lmdeploy - WARNING - The token <|action_end|>, its length of indexes [333, 352, 1457, 6368, 352, 330] is over than 1. Currently, it can not be used as stop words

Below is the result of encoding with the tokenizer currently on HF. I went through the commits, and the tokenizer does not seem to have been modified since it was uploaded. You can check whether your tokenizer is broken.

from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained('OpenGVLab/InternVL-Chat-V1-5', trust_remote_code=True)
tok.encode('<|im_end|>', add_special_tokens=False) # result is [92542]
tok.encode('[UNUSED_TOKEN_145]', add_special_tokens=False) # result is also [92542]
sunzx8 commented 2 months ago

I finally figured out the problem: my transformers version 4.33.0 does not support the internlm2 tokenizer, so my encode result was always [333, 352, 449, 6368, 352, 330]. After upgrading transformers to 4.36.2, running your code gives the same result as yours.

However, it was this lmdeploy framework that forced me to downgrade transformers to 4.33.0 in the first place, because with transformers==4.36.2, when I run the command

lmdeploy serve api_server /home/ubuntu/shawn/hf_ms_model/InternVL-Chat-V1-5 --server-port 6688 --tp 8 --session-len 8192 --model-format hf

it raises the following error:

File "/home/ubuntu/miniconda3/envs/internvl/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1384, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback): libcudart.so.11.0: cannot open shared object file: No such file or directory

Searching online, the suggested fix is to downgrade transformers to 4.33.0 (https://github.com/oobabooga/text-generation-webui/issues/4357), which turns this into a dead loop. How can I resolve it?

irexyc commented 2 months ago

I checked; my version is 4.41.2.

irexyc commented 2 months ago

You have flash_attn installed, right? Try uninstalling it and see.
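To confirm where the missing libcudart.so.11.0 comes from, here is a minimal sketch, assuming the library is pulled in by a flash_attn build compiled against CUDA 11:

# Import flash_attn by itself and see whether it is the module that
# fails to find libcudart.so.11.0 (assumption: flash_attn was built for CUDA 11).
import importlib
try:
    importlib.import_module('flash_attn')
    print('flash_attn imported fine')
except (ImportError, OSError) as e:
    print('flash_attn failed to load:', e)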

sunzx8 commented 2 months ago

> I checked; my version is 4.41.2.

Thanks! I have solved that problem. One remaining question: when running internvl there is an option that controls the image tiling size. How should this parameter be set when launching with the server? Likewise, where are the sampling-type parameters configured?

Parameter: max_num. Function:

def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # calculate the existing image aspect ratio
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images
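To see how max_num bounds the tiling, here is a small self-contained sketch (not from the repo) that reproduces just the grid enumeration from the function above:

# Enumerate every (cols, rows) tile grid allowed by min_num/max_num, as in
# dynamic_preprocess above; the chosen grid decides how many 448x448 tiles
# (and thus how many vision tokens) one image produces.
min_num, max_num = 1, 6
target_ratios = sorted(
    {(i, j)
     for n in range(min_num, max_num + 1)
     for i in range(1, n + 1)
     for j in range(1, n + 1)
     if min_num <= i * j <= max_num},
    key=lambda x: x[0] * x[1])
print(target_ratios)  # (1, 1), (1, 2), (2, 1), ... up to grids of 6 tiles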

The corresponding place in the paper:

(screenshot from the paper)
irexyc commented 2 months ago

There is currently no option to configure this parameter; you can only modify the source code in the installation directory, at https://github.com/InternLM/lmdeploy/blob/main/lmdeploy/vl/model/internvl.py
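As for the sampling-type parameters, the OpenAI-compatible endpoint takes them per request. This is a minimal sketch, reusing the client and model_name from the request code earlier in this thread and assuming the endpoint honors the standard max_tokens field:

# Sampling parameters are per-request fields of the OpenAI-compatible API.
response = client.chat.completions.create(
    model=model_name,
    messages=[{'role': 'user', 'content': '你是谁?'}],
    temperature=0.7,   # sampling temperature
    top_p=0.8,         # nucleus sampling
    max_tokens=512)    # cap on generated tokens
print(response.choices[0].message.content)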

sunzx8 commented 2 months ago

Great — thank you very much for helping me solve the problem!