JohnRSim opened this issue 1 week ago
Hi there 👋 Which model are you trying to convert? Also, can you provide the transformers.js code you are trying to run?
Note that our conversion script is only built for Hugging Face transformers models (not arbitrary model conversion).
Ah, thanks Xenova.
I created a custom image-classifier model with tfjs-node; I attached the model.onnx (with a .txt extension) in my prior message.
Let me grab and share the code shortly; it's pretty basic.
This is what I'm using to validate/test the generated ONNX: validate_onnx.py.txt, test_image.py.txt
I'm generating the model using tfjs-node: generate.js.txt
transformers.js code to test with (not working): test.js.txt
And then I was playing around with a web worker and your latest ms-florence example, seeing if I could fine-tune with the custom images (WIP): customVision.js.txt
Here is an image from the training set I was using to test against.
If there are any guides you can point me to, that would be great. I just want to create a small custom image classifier (ideally with Node), convert it to ONNX, load it with transformers.js, and pass images through it to get back a classified label.
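For context, a minimal tfjs-node classifier of this kind looks roughly like the following sketch (the real code is in the attached generate.js; the layer choices here are illustrative, the relevant parts are the 128×128×3 input and the 2 output labels):

```js
const tf = require('@tensorflow/tfjs-node');

async function main() {
  // Tiny illustrative CNN: 128x128x3 input, 2-class softmax output
  const model = tf.sequential();
  model.add(tf.layers.conv2d({ inputShape: [128, 128, 3], filters: 16, kernelSize: 3, activation: 'relu' }));
  model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  model.add(tf.layers.flatten());
  model.add(tf.layers.dense({ units: 2, activation: 'softmax' }));

  model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy'] });

  // ...model.fit(...) on the labelled training images goes here...

  // Writes model.json + weight shards in the tfjs layers format
  await model.save('file://./saved-model');
}

main();
```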
config.json

```json
{
  "model_type": "vit",
  "hidden_size": 768,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "intermediate_size": 3072,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "attention_probs_dropout_prob": 0.1,
  "image_size": 128,
  "patch_size": 16,
  "num_channels": 3,
  "num_labels": 2
}
```

preprocessor_config.json

```json
{
  "feature_extractor_type": "ViTFeatureExtractor",
  "image_mean": [0.5, 0.5, 0.5],
  "image_std": [0.5, 0.5, 0.5],
  "size": 128
}
```
Hmm, looks like the link to the model is broken.
Feel free to upload it to the Hugging Face Hub for easier transfer (https://huggingface.co/new).
Thanks @xenova
I've dropped the files in here: https://huggingface.co/jrsimuix/issue1038
Question
I'm using tfjs-node to create an image-classifier model, but I'm stuck on how to convert model.json into a format that optimum or scripts.convert can use to produce an ONNX file.
I'm able to convert to a graph model using
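a tensorflowjs_converter call along these lines (a sketch; exact paths and flags may differ):

```bash
# tfjs layers model (model.json + weight shards) -> tfjs graph model
tensorflowjs_converter \
  --input_format=tfjs_layers_model \
  --output_format=tfjs_graph_model \
  ./saved-model/model.json ./graph-model
```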
and then I can convert that to ONNX using
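tf2onnx, roughly like this (a sketch; exact flags may vary by tf2onnx version):

```bash
# tfjs graph model -> ONNX
python -m tf2onnx.convert \
  --tfjs ./graph-model/model.json \
  --output ./saved-model/model.onnx
```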
This works fine when I test it in Python, but I'm unable to use it in transformers.js. Do I need to use optimum to convert it? I tried a number of approaches but was unable to convert to ONNX that way; I then saw scripts.convert but am having difficulties with it. For reference, this is the Python check that works:
```python
import numpy as np
import onnxruntime as ort
from PIL import Image

# Load the ONNX model
session = ort.InferenceSession('./saved-model/model.onnx')

# Get input and output names
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# Load and preprocess the image
img = Image.open('./training_images/shirt/00e745c9-97d9-429d-8c3f-d3db7a2d2991.jpg').resize((128, 128))
img_array = np.array(img).astype(np.float32) / 255.0  # Normalize pixel values to [0, 1]
img_array = np.expand_dims(img_array, axis=0)         # Add batch dimension (NHWC)

# Run inference
outputs = session.run([output_name], {input_name: img_array})
print(f"Inference outputs: {outputs}")
```
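On the transformers.js side, what I'm ultimately trying to get working is roughly the following (a sketch, assuming the repo follows the layout transformers.js typically expects, i.e. config.json and preprocessor_config.json at the top level and the weights at onnx/model.onnx; swap in @xenova/transformers if you're on v2):

```js
import { pipeline } from '@huggingface/transformers';

// Image-classification pipeline backed by the custom ONNX model on the Hub
const classifier = await pipeline('image-classification', 'jrsimuix/issue1038');

// Pass an image path (or URL) and get back labelled scores
const output = await classifier('./training_images/shirt/00e745c9-97d9-429d-8c3f-d3db7a2d2991.jpg');
console.log(output); // expected shape: [{ label: '...', score: ... }, ...]
```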