Closed. KudzayiKing closed this issue 3 years ago.
In the `getTopKClasses` function, can you add a `console.log(values.length);`? The error you see typically indicates that the class corresponding to the index is undefined. This generally happens when there is a mismatch between the length of the output softmax array and the number of possible classes. So the value you should see from `console.log(values.length);` should be 38, assuming the app is loading the correct model and the model was trained properly so that the output shape is 38.
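A minimal sketch of that failure mode, written in Python for brevity (the 38-entry class map and the 1000-long probability array are hypothetical stand-ins matching the numbers in this thread):

```python
# Sketch of the getTopKClasses failure mode: a 38-entry class map paired
# with a 1000-way softmax output. Both objects here are made-up stand-ins.
MODEL_CLASSES = {i: f"class_{i}" for i in range(38)}

probabilities = [0.0] * 1000   # length of the unmodified MobileNet output
probabilities[512] = 0.9       # suppose the top prediction lands at index 512

top_index = max(range(len(probabilities)), key=probabilities.__getitem__)

# Any index >= 38 has no entry in the class map, which is what surfaces as
# an undefined class name in the app.
print(len(probabilities), MODEL_CLASSES.get(top_index))
```

With a correctly exported 38-class model, every index the model can emit has a matching entry, so the lookup can never come back empty.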
@pvaneck Thank you for the quick reply, the console shows 1000
Sounds like you are still using the original MobileNet model, since the model prediction output is giving you 1000 class probabilities. Double-check that the updated model files are in the `public/model` directory. It might also be worthwhile to double-check the output of your Keras model in Python to ensure that the output size is 38.
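One way to sanity-check that on the Python side (the model below is a tiny hypothetical stand-in, not the actual plant-leaf network; what matters is inspecting `output_shape`):

```python
import tensorflow as tf

# Hypothetical stand-in for the fine-tuned classifier: the key property is
# that the final Dense layer has 38 units, one per plant-leaf class.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(38, activation='softmax'),
])

# The last dimension of the output shape must match the number of classes.
print(model.output_shape)  # (None, 38)
```

If this prints `(None, 1000)` instead, the wrong model object is being saved or converted.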
@pvaneck I ran `model.summary()` on the `model.h5`; the output is 38. And yes, I updated the files in `public/model` and `scr/public/model/classes.js`. I also saved one version of the model with `ImageDataGenerator` `class_mode="categorical"` and `loss="CategoricalCrossentropy"`, and another with `class_mode="sparse"` and `loss="SparseCategoricalCrossentropy"`. Nothing changed; both versions did not show classes.
@pvaneck Another thing I noticed: adding `console.log('category:', category)` inside `this.state.predictions.map((category) => {` shows this in the console:
Can you also do a `this.model.summary()` somewhere and see what the output shape is? You can just put it near where you put `console.log(values.length);`. If the model is correct and was converted properly, the output shape should be `[null, 38]`. A shape of `[null, 1000]` seems to indicate the original MobileNet with the 1000 ImageNet classes is being used.
@pvaneck Yes, your suspicions are right! It shows `[null, 1000]` in the console. Guess I need to get back to saving the model. Is this caused by `tfjs-converter`, or do I need to specifically save the model with signatures during export?
https://www.tensorflow.org/guide/saved_model#specifying_signatures_during_export
You were exporting your model in the h5 format, right? Those signatures only apply to the TF SavedModel format. Converting an h5 file is outlined here: https://www.tensorflow.org/js/tutorials/conversion/import_keras (just make sure you use the correct plant-leaf model when converting via `tensorflowjs_converter --input_format=keras ./path/to/plant-leaf-model.h5 ./my-model`).
@pvaneck Yes, I am exporting h5 then converting it to .js directly using `tfjs.converters.save_keras_model(model, tfjs_target_dir)`, and I am also saving the h5 then converting it to .js somewhere else. I have uploaded the h5 file to cross-check the output shape, and what I am getting is `[null, 38]`. And this is the h5 I have converted and put in my `public/model`. The console still shows `[null, 1000]` after running `Classify`, so I can't understand where things are going wrong.
@pvaneck I was just going through some test images, and a couple of images actually have their classes appearing in the app, as in the picture below. Why am I getting the feeling that the converter is not converting the model properly? Maybe I should use a previous version of the converter?
Can you show the Python code that constructs and saves your h5 model? The model.summary in the console only shows what look like the base MobileNet layers.
I tried reconstructing a model similar to yours:
import tensorflow as tf
import tensorflowjs as tfjs

base_model = tf.keras.applications.MobileNet(
input_shape=(224, 224, 3),
include_top=False, weights='imagenet',
classifier_activation='softmax'
)
base_model.trainable = False
input_1 = tf.keras.layers.Input(shape=(224,224,3))
x = base_model(input_1)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0)(x)
x = tf.keras.layers.Dense(38, activation='softmax')(x)
model = tf.keras.models.Model(inputs=input_1, outputs=x)
model.summary()
tfjs.converters.save_keras_model(model, './tfjs-output')
This gives a model summary just like yours:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
mobilenet_1.00_224 (Function (None, 7, 7, 1024) 3228864
_________________________________________________________________
global_average_pooling2d (Gl (None, 1024) 0
_________________________________________________________________
dropout (Dropout) (None, 1024) 0
_________________________________________________________________
dense (Dense) (None, 38) 38950
=================================================================
Total params: 3,267,814
Trainable params: 38,950
Non-trainable params: 3,228,864
The converted model loaded in TensorFlow.js looks the same, as expected:
_________________________________________________________________
tfjs@2.4.0:17 Layer (type) Output shape Param #
tfjs@2.4.0:17 =================================================================
tfjs@2.4.0:17 input_2 (InputLayer) [null,224,224,3] 0
tfjs@2.4.0:17 _________________________________________________________________
tfjs@2.4.0:17 mobilenet_1.00_224 (Function [null,7,7,1024] 3228864
tfjs@2.4.0:17 _________________________________________________________________
tfjs@2.4.0:17 global_average_pooling2d (Gl [null,1024] 0
tfjs@2.4.0:17 _________________________________________________________________
tfjs@2.4.0:17 dropout (Dropout) [null,1024] 0
tfjs@2.4.0:17 _________________________________________________________________
tfjs@2.4.0:17 dense (Dense) [null,38] 38950
tfjs@2.4.0:17 =================================================================
tfjs@2.4.0:17 Total params: 3267814
tfjs@2.4.0:17 Trainable params: 38950
tfjs@2.4.0:17 Non-trainable params: 3228864
tfjs@2.4.0:17 _________________________________________________________________
So, I would double check your model saving to ensure you are not just saving the base model.
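To illustrate the trap with a toy stand-in (not a real MobileNet, so no weights need downloading): saving or converting the variable that holds the base network, instead of the fine-tuned model, ships the 1000-class head.

```python
import tensorflow as tf

# Toy stand-in for MobileNet: ends in a 1000-way ImageNet-style head.
inputs = tf.keras.layers.Input(shape=(224, 224, 3))
pooled = tf.keras.layers.GlobalAveragePooling2D()(inputs)
base_model = tf.keras.Model(inputs, tf.keras.layers.Dense(1000)(pooled))

# Fine-tuned model: reuse the pooled features, attach a 38-class head.
features = base_model.layers[-2].output  # output of the pooling layer
outputs = tf.keras.layers.Dense(38, activation='softmax')(features)
model = tf.keras.Model(base_model.input, outputs)

print(base_model.output_shape)  # (None, 1000) -> [null, 1000] in the browser
print(model.output_shape)       # (None, 38)   -> [null, 38], what you want
# Passing base_model instead of model to the converter reproduces the bug.
```

So the thing to verify is which object is actually handed to `tfjs.converters.save_keras_model` (or saved to the h5 that gets converted).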
Sidenote: In trying this, it seems we need a minimum TF version of 2.4.0 (for Functional layer support) but also a version less than 2.8.0 (just released, and it seems to introduce a bug that breaks the webcam component of the app). So I will probably submit a PR updating the package.json.
My tests were with tensorflowjs==2.8.0 (Python) and @tensorflow/tfjs==2.7.0 (JavaScript).
@pvaneck Thanks, here is the full Python code, take a look while I cross-check the versions! Let me know if there are any errors in the Python code!
@KudzayiKing The code looks fine and should export a working model. Hmm, now that I think about it, perhaps the app keeps using an old model you previously loaded and is not picking up any updated models you put in `public/model`. By default, the app will keep fetching a saved model from IndexedDB in the browser (giving it offline functionality). To force the app to load the latest model, either update `model_info.txt` with a newer date (`date > model_info.txt`) or just delete the `tensorflowjs` IndexedDB database in the browser. Reloading should then pull in the new model.
@pvaneck Yes, it worked! Everything is now working well! I just deleted the `tensorflowjs` database. Thanks for the support. Please check your email!
Hi! I made a plant leaf disease classifier using transfer learning with MobileNet. I saved my model as h5, then converted it into .js. In the app, I changed classes.js to fit the softmax of my model, which has 38 classes. When I run the app, it does not show the classes, it just shows the probabilities. But interestingly, when I put classes.js with the ImageNet classes, the classes appear in the app. Where am I going wrong? I have attached files below. @pvaneck @xgqfrms @autumnblue @joshpitzalis @jdhiro @keithort @Diaver
const INDEXEDDB_DB = 'tensorflowjs'; const INDEXEDDB_STORE = 'model_info_store'; const INDEXEDDB_KEY = 'web-model';
/**
 * @extends React.Component
 */
export default class Classify extends Component {
constructor(props) { super(props);
this.webcam = null; this.model = null; this.modelLastUpdated = null;
this.state = { modelLoaded: false, filename: '', isModelLoading: false, isClassifying: false, predictions: [], photoSettingsOpen: true, modelUpdateAvailable: false, showModelUpdateAlert: false, showModelUpdateSuccess: false, isDownloadingModel: false }; }
async componentDidMount() { if (('indexedDB' in window)) { try { this.model = await tf.loadLayersModel('indexeddb://' + INDEXEDDB_KEY);
} // If error here, assume that the object store doesn't exist and the model currently isn't // saved in IndexedDB. catch (error) { console.log('Not found in IndexedDB. Loading and saving...'); console.log(error); this.model = await tf.loadLayersModel(MODEL_PATH); await this.model.save('indexeddb://' + INDEXEDDB_KEY); } } // If no IndexedDB, then just download like normal. else { console.warn('IndexedDB not supported.'); this.model = await tf.loadLayersModel(MODEL_PATH); }
this.setState({ modelLoaded: true }); this.initWebcam();
// Warm up model. let prediction = tf.tidy(() => this.model.predict(tf.zeros([1, IMAGE_SIZE, IMAGE_SIZE, 3]))); prediction.dispose(); }
async componentWillUnmount() { if (this.webcam) { this.webcam.stop(); }
// Attempt to dispose of the model. try { this.model.dispose(); } catch (e) { // Assume model is not loaded or already disposed. } }
initWebcam = async () => { try { this.webcam = await tf.data.webcam( this.refs.webcam, {resizeWidth: CANVAS_SIZE, resizeHeight: CANVAS_SIZE, facingMode: 'environment'} ); } catch (e) { this.refs.noWebcam.style.display = 'block'; } }
startWebcam = async () => { if (this.webcam) { this.webcam.start(); } }
stopWebcam = async () => { if (this.webcam) { this.webcam.stop(); } }
getModelInfo = async () => { await fetch(`${config.API_ENDPOINT}/model_info`, { method: 'GET', }) .then(async (response) => { await response.json().then((data) => { this.modelLastUpdated = data.last_updated; }) .catch((err) => { console.log('Unable to parse model info.'); }); }) .catch((err) => { console.log('Unable to get model info.'); }); }

updateModel = async () => { // Get the latest model from the server and refresh the one saved in IndexedDB. console.log('Updating the model: ' + INDEXEDDB_KEY); this.setState({ isDownloadingModel: true }); this.model = await tf.loadLayersModel(MODEL_PATH); await this.model.save('indexeddb://' + INDEXEDDB_KEY); this.setState({ isDownloadingModel: false, modelUpdateAvailable: false, showModelUpdateAlert: false, showModelUpdateSuccess: true }); }
classifyLocalImage = async () => { this.setState({ isClassifying: true });
const croppedCanvas = this.refs.cropper.getCroppedCanvas(); const image = tf.tidy( () => tf.browser.fromPixels(croppedCanvas).toFloat());
// Process and resize image before passing in to model. const imageData = await this.processImage(image); const resizedImage = tf.image.resizeBilinear(imageData, [IMAGE_SIZE, IMAGE_SIZE]);
const logits = this.model.predict(resizedImage); const probabilities = await logits.data(); const preds = await this.getTopKClasses(probabilities, TOPK_PREDICTIONS);
this.setState({ predictions: preds, isClassifying: false, photoSettingsOpen: !this.state.photoSettingsOpen });
// Draw thumbnail to UI. const context = this.refs.canvas.getContext('2d'); const ratioX = CANVAS_SIZE / croppedCanvas.width; const ratioY = CANVAS_SIZE / croppedCanvas.height; const ratio = Math.min(ratioX, ratioY); context.clearRect(0, 0, CANVAS_SIZE, CANVAS_SIZE); context.drawImage(croppedCanvas, 0, 0, croppedCanvas.width * ratio, croppedCanvas.height * ratio);
// Dispose of tensors we are finished with. image.dispose(); imageData.dispose(); resizedImage.dispose(); logits.dispose(); }
classifyWebcamImage = async () => { this.setState({ isClassifying: true });
const imageCapture = await this.webcam.capture();
const resized = tf.image.resizeBilinear(imageCapture, [IMAGE_SIZE, IMAGE_SIZE]); const imageData = await this.processImage(resized); const logits = this.model.predict(imageData); const probabilities = await logits.data(); const preds = await this.getTopKClasses(probabilities, TOPK_PREDICTIONS);
this.setState({ predictions: preds, isClassifying: false, photoSettingsOpen: !this.state.photoSettingsOpen });
// Draw thumbnail to UI. const tensorData = tf.tidy(() => imageCapture.toFloat().div(255)); await tf.browser.toPixels(tensorData, this.refs.canvas);
// Dispose of tensors we are finished with. resized.dispose(); imageCapture.dispose(); imageData.dispose(); logits.dispose(); tensorData.dispose(); }
processImage = async (image) => { return tf.tidy(() => image.expandDims(0).toFloat().div(127).sub(1)); }
/**
 * Computes the top K classes and probabilities from the model's softmax output.
 */
getTopKClasses = async (values, topK) => { const valuesAndIndices = []; for (let i = 0; i < values.length; i++) { valuesAndIndices.push({value: values[i], index: i}); } valuesAndIndices.sort((a, b) => b.value - a.value); const topkValues = new Float32Array(topK); const topkIndices = new Int32Array(topK); for (let i = 0; i < topK; i++) { topkValues[i] = valuesAndIndices[i].value; topkIndices[i] = valuesAndIndices[i].index; }

const topClassesAndProbs = []; for (let i = 0; i < topkIndices.length; i++) { topClassesAndProbs.push({ className: MODEL_CLASSES[topkIndices[i]], probability: (topkValues[i] * 100).toFixed(2) }); } return topClassesAndProbs; }
handlePanelClick = event => { this.setState({ photoSettingsOpen: !this.state.photoSettingsOpen }); }
handleFileChange = event => { if (event.target.files && event.target.files.length > 0) { this.setState({ file: URL.createObjectURL(event.target.files[0]), filename: event.target.files[0].name }); } }
handleTabSelect = activeKey => { switch(activeKey) { case 'camera': this.startWebcam(); break; case 'localfile': this.setState({filename: null, file: null}); this.stopWebcam(); break; default: } }
render() { return (
Please use a device with a camera, or upload an image instead.
Predictions
); } }
classes.js:

/* eslint-disable */
// These classes should correspond to the softmax output of your model.
export const MODEL_CLASSES = {
  0: 'Apple___Apple_scab',
  1: 'Apple___Black_rot',
  2: 'Apple___Cedar_apple_rust',
  3: 'Apple___healthy',
  4: 'Blueberry___healthy',
  5: 'Cherry_(including_sour)___Powdery_mildew',
  6: 'Cherry_(including_sour)___healthy',
  7: 'Corn_(maize)___Cercospora_leaf_spot Gray_leaf_spot',
  8: 'Corn_(maize)___Common_rust_',
  9: 'Corn_(maize)___Northern_Leaf_Blight',
  10: 'Corn_(maize)___healthy',
  11: 'Grape___Black_rot',
  12: 'Grape___Esca_(Black_Measles)',
  13: 'Grape___Leaf_blight_(Isariopsis_Leaf_Spot)',
  14: 'Grape___healthy',
  15: 'Orange___Haunglongbing_(Citrus_greening)',
  16: 'Peach___Bacterial_spot',
  17: 'Peach___healthy',
  18: 'Pepper,_bell___Bacterial_spot',
  19: 'Pepper,_bell___healthy',
  20: 'Potato___Early_blight',
  21: 'Potato___Late_blight',
  22: 'Potato___healthy',
  23: 'Raspberry___healthy',
  24: 'Soybean___healthy',
  25: 'Squash___Powdery_mildew',
  26: 'Strawberry___Leaf_scorch',
  27: 'Strawberry___healthy',
  28: 'Tomato___Bacterial_spot',
  29: 'Tomato___Early_blight',
  30: 'Tomato___Late_blight',
  31: 'Tomato___Leaf_Mold',
  32: 'Tomato___Septoria_leaf_spot',
  33: 'Tomato___Spider_mites Two-spotted_spider_mite',
  34: 'Tomato___Target_Spot',
  35: 'Tomato___Tomato_Yellow_Leaf_Curl_Virus',
  36: 'Tomato___Tomato_mosaic_virus',
  37: 'Tomato___healthy'
};