xenova / transformers.js

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
https://huggingface.co/docs/transformers.js
Apache License 2.0

Converting a model to ONNX using the provided script is hard (fails most of the time) #543

Open bajrangCoder opened 5 months ago

bajrangCoder commented 5 months ago

Question

I tried to convert the StarCoder model using your ONNX conversion script, but it failed with an exception.

Model: https://huggingface.co/HuggingFaceH4/starchat-beta or https://huggingface.co/bigcode/starcoderbase

Logs:

$ python -m scripts.convert --quantize --model_id HuggingFaceH4/starchat-beta
Framework not specified. Using pt to export to ONNX.
model-00001-of-00004.safetensors:   3%|█▏                               | 346M/9.96G [03:20<1:33:01, 1.72MB/s]
Downloading shards:   0%|                                                               | 0/4 [03:23<?, ?it/s]
Loading TensorFlow model in PyTorch before exporting.
Traceback (most recent call last):
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 712, in _error_catcher
    yield
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 833, in _raw_read
    raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
urllib3.exceptions.IncompleteRead: IncompleteRead(351738674 bytes read, 9606258302 more expected)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/requests/models.py", line 816, in generate
    yield from self.raw.stream(chunk_size, decode_content=True)
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 934, in stream
    data = self.read(amt=amt, decode_content=decode_content)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 905, in read
    data = self._raw_read(amt)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 811, in _raw_read
    with self._error_catcher():
  File "/usr/lib/python3.11/contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/urllib3/response.py", line 729, in _error_catcher
    raise ProtocolError(f"Connection broken: {e!r}", e) from e
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(351738674 bytes read, 9606258302 more expected)', IncompleteRead(351738674 bytes read, 9606258302 more expected))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/optimum/exporters/tasks.py", line 1708, in get_model_from_task
    model = model_class.from_pretrained(model_name_or_path, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2876, in from_pretrained
    resolved_archive_file, sharded_metadata = get_checkpoint_shard_files(
                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/transformers/utils/hub.py", line 1040, in get_checkpoint_shard_files
    cached_filename = cached_file(
                      ^^^^^^^^^^^^
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/transformers/utils/hub.py", line 429, in cached_file
    resolved_file = hf_hub_download(
                    ^^^^^^^^^^^^^^^^
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1457, in hf_hub_download
    http_get(
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 524, in http_get
    for chunk in r.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE):
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/requests/models.py", line 818, in generate
    raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(351738674 bytes read, 9606258302 more expected)', IncompleteRead(351738674 bytes read, 9606258302 more expected))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/username/Desktop/transformers.js/scripts/convert.py", line 465, in <module>
    main()
  File "/home/username/Desktop/transformers.js/scripts/convert.py", line 426, in main
    main_export(**export_kwargs)
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/optimum/exporters/onnx/__main__.py", line 323, in main_export
    model = TasksManager.get_model_from_task(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/optimum/exporters/tasks.py", line 1717, in get_model_from_task
    model = model_class.from_pretrained(model_name_or_path, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/username/Desktop/transformers.js/scripts/venv/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2846, in from_pretrained
    raise EnvironmentError(
OSError: HuggingFaceH4/starchat-beta does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
bajrangCoder commented 5 months ago

The wizardlm/math model is also not working.
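Note that the traceback above points to a dropped connection mid-download (`IncompleteRead` after ~346 MB of a ~10 GB shard), not to a bug in the conversion script itself; the final `OSError` is the loader giving up after the failed fetch. One workaround (a sketch, not a confirmed fix) is to pre-download the weights with `huggingface_hub.snapshot_download`, which resumes partially downloaded files, wrapped in a small retry loop so a flaky connection does not abort the whole export. The retry helper below is generic; the commented-out `snapshot_download` call is illustrative, and passing the resulting local directory to `--model_id` is an assumption worth verifying against `scripts/convert.py`.

```python
import time


def retry(fn, attempts=3, delay=1.0):
    """Call fn(), retrying on any exception up to `attempts` times.

    Returns fn()'s result on the first success; re-raises the last
    exception if every attempt fails.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)


# Hypothetical usage: wrap the Hub download so a dropped connection
# (the IncompleteRead above) is retried instead of killing the export.
# snapshot_download resumes partial files, so retries make progress.
#
# from huggingface_hub import snapshot_download
# local_dir = retry(lambda: snapshot_download("HuggingFaceH4/starchat-beta"))
# # then (assuming the script accepts a local path):
# # python -m scripts.convert --quantize --model_id <local_dir>

# Self-contained demonstration with a function that fails twice,
# standing in for a flaky network call:
calls = {"n": 0}


def flaky_download():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Connection broken: IncompleteRead")
    return "ok"


result = retry(flaky_download, attempts=5, delay=0.0)
print(result, calls["n"])
```

Because the retry logic is separate from the download call, the same wrapper can be reused around `hf_hub_download` or any other flaky step in the pipeline.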