huggingface / transformers.js

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
https://huggingface.co/docs/transformers.js
Apache License 2.0

# script.convert tfjs model to onnx support #1038

Open · JohnRSim opened this issue 1 week ago

JohnRSim commented 1 week ago

Question

I'm using tfjs-node to create an image-classifier model, but I'm stuck on how to convert model.json to a format that optimum or script.convert can turn into an ONNX file.

I'm able to convert to a graph model using

```shell
tensorflowjs_converter --input_format=tfjs_layers_model \
  --output_format=tfjs_graph_model \
  ./saved-model/layers-model/model.json \
  ./saved-model/graph-model
```

and then I can convert to ONNX using

```shell
python3 -m tf2onnx.convert --tfjs ./saved-model/graph-model/model.json --output ./saved-model/model.onnx
```

This works fine when I test in Python, but I'm unable to use it in transformers.js. I probably need to use optimum to convert it? I tried a number of approaches but was unable to convert to ONNX. I then saw script.convert, but am having difficulties.

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

# Load the ONNX model
session = ort.InferenceSession('./saved-model/model.onnx')

# Get input and output names
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# Load and preprocess the image
img = Image.open('./training_images/shirt/00e745c9-97d9-429d-8c3f-d3db7a2d2991.jpg').resize((128, 128))
img_array = np.array(img).astype(np.float32) / 255.0  # Normalize pixel values to [0, 1]
img_array = np.expand_dims(img_array, axis=0)  # Add batch dimension

# Run inference
outputs = session.run([output_name], {input_name: img_array})
print(f"Inference outputs: {outputs}")
```
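For completeness, turning the raw outputs into a class label can be sketched as follows. This is a minimal sketch, not the author's code: the label names and logit values are hypothetical, assuming the two classes implied by `"num_labels": 2` in the config further down the thread.

```python
import numpy as np

# Hypothetical post-processing of session.run output: softmax over the
# logits, then argmax to pick a label. Label names are assumptions.
labels = ["shirt", "other"]       # assumed 2-class label set
logits = np.array([[2.1, -0.3]])  # example raw model output, shape (1, 2)
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
predicted = labels[int(np.argmax(probs))]
print(predicted, float(probs.max()))
```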



[Uploading model.onnx.txt…]()

Any guidance on how to go from a tfjs model.json to an ONNX model supported by transformers.js would really help me out.
Thanks! 
xenova commented 1 week ago

Hi there 👋 which model are you trying to convert? Also, can you provide the transformers.js code you are trying to run?

Note that our conversion script is only built for Hugging Face transformers models (and not just arbitrary conversion)

JohnRSim commented 1 week ago

Ah, thanks Xenova.

I created a custom image-classifier model with tfjs-node; I attached the model.onnx (with a .txt extension) in my prior message.

Let me grab the code and share it shortly; it's pretty basic.

JohnRSim commented 1 week ago

This is what I'm using to validate and test the generated ONNX: validate_onnx.py.txt test_image.py.txt

I'm generating the model using tfjs-node: generate.js.txt

transformers.js code to test with (not working): test.js.txt

And then I was playing around with a web worker and your latest ms-florence example, to see if I could fine-tune with the custom images. (WIP) customVision.js.txt

Here is an image from the training set I was using to test against: 00e745c9-97d9-429d-8c3f-d3db7a2d2991

If there are any guides you can point me to, that would help. I just want to create a custom mini image classifier (ideally with Node), convert it to ONNX, and use transformers.js to pass images through it and return a classified label.

config.json

```json
{
  "model_type": "vit",
  "hidden_size": 768,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "intermediate_size": 3072,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "attention_probs_dropout_prob": 0.1,
  "image_size": 128,
  "patch_size": 16,
  "num_channels": 3,
  "num_labels": 2
}
```

preprocessor_config.json

```json
{
    "feature_extractor_type": "ViTFeatureExtractor",
    "image_mean": [0.5, 0.5, 0.5],
    "image_std": [0.5, 0.5, 0.5],
    "size": 128
}
```
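One thing worth double-checking here: with `image_mean` and `image_std` both 0.5, a ViT-style preprocessor rescales pixels to [0, 1] and then normalizes to [-1, 1], whereas the validation script earlier in the thread only divides by 255 to get [0, 1]. If the model was trained on [0, 1] inputs, this mismatch alone can produce wrong predictions once transformers.js applies the preprocessor config. A minimal sketch of the difference (the pixel value is arbitrary):

```python
pixel = 200                        # an arbitrary 8-bit pixel value
scaled = pixel / 255.0             # training-time preprocessing -> [0, 1]
normalized = (scaled - 0.5) / 0.5  # mean/std 0.5 preprocessing  -> [-1, 1]
print(scaled, normalized)
```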
xenova commented 1 week ago

Hmm, looks like the link to the model is broken.

Feel free to upload it to the Hugging Face Hub for easier transferring (https://huggingface.co/new)

JohnRSim commented 1 week ago

Thanks @xenova

I've dropped the files into here: https://huggingface.co/jrsimuix/issue1038