Closed: alihassan711 closed this issue 4 years ago
Where in your project folder are those two files located? What is in your metro.config.js file? Are you able to load any JSON file even if you do not load any of the tensorflow libraries?
Your issue may be a more general react native one than a tensorflow.js specific one. If you make a separate app without tensorflow.js can you load local json files?
It's an Expo project, so there is no metro config. Both files are in the assets folder. I have not tried to load local JSON in any other project.
I don't think you can load local assets in a hosted Expo project. I would check the Expo docs to see if there is any way to load local assets (independent of tfjs). Otherwise you might need to eject your app.
To me, it looks like the problem is with reading the .bin file. I have tried using Expo assets as well.
const modelJson = Asset.fromModule(require('./assets/model.json'));
const modelWeights = Asset.fromModule(require('./assets/det.bin'));
this.model = await tf.loadLayersModel(bundleResourceIO(modelJson, modelWeights));
So the JSON file is loading correctly? Did you configure metro to add .bin files to your assetExts?
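For reference, adding .bin to Metro's asset extensions (in a bare React Native app, or after ejecting from Expo, where a metro.config.js exists) typically looks like the sketch below; the exact default-config import depends on your setup and is an assumption here:

```javascript
// metro.config.js (sketch, assuming a bare React Native / ejected project)
const { getDefaultConfig } = require('metro-config');

module.exports = (async () => {
  const defaultConfig = await getDefaultConfig();
  const { assetExts } = defaultConfig.resolver;
  return {
    resolver: {
      // Let Metro bundle .bin model weight files as assets,
      // so require('./assets/model.bin') resolves
      assetExts: [...assetExts, 'bin'],
    },
  };
})();
```

Without this, require() of a .bin file fails with an "Unable to resolve module" error at bundle time.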
I am getting this response:
{"dataId": {}, "dtype": "float32", "id": 17643, "isDisposedInternal": false, "kept": false, "rankType": "2", "scopeId": 9979, "shape": [1, 2], "size": 2, "strides": [2]}
after calling model.predict(imageTensor). I am not getting any actual prediction values.
I am getting an error with the same issue. I'm executing this code:
try {
let model = await mobilenet.load();
// Get a reference to the bundled asset
const imageUri = Asset.fromModule(require('./assets/catsmall.png')).uri;
// Fetch it as binary data (fetch from @tensorflow/tfjs-react-native)
const response = await fetch(imageUri, {}, { isBinary: true });
console.log(response);
// decodeJpeg expects a Uint8Array, not a raw ArrayBuffer
const imageData = new Uint8Array(await response.arrayBuffer());
console.log("image data: " + imageData);
// Note: decodeJpeg only handles JPEG data; a .png asset will not decode correctly
const imageTensor = decodeJpeg(imageData);
console.log("image tensor: " + imageTensor);
const prediction = await model.classify(imageTensor);
} catch (e) {
console.log("error network: " + e);
}
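One concrete pitfall in the snippet above: decodeJpeg expects a Uint8Array, while response.arrayBuffer() yields a raw ArrayBuffer. The wrap is a one-liner, shown here in plain JavaScript with a tiny simulated payload:

```javascript
// response.arrayBuffer() gives an ArrayBuffer; decodeJpeg wants a Uint8Array.
// Simulate a tiny binary payload and wrap it:
const arrayBuffer = new ArrayBuffer(2);
const view = new DataView(arrayBuffer);
view.setUint8(0, 0xff); // JPEG files start with the bytes 0xFF 0xD8
view.setUint8(1, 0xd8);

// Zero-copy view over the same underlying memory:
const bytes = new Uint8Array(arrayBuffer);
console.log(bytes[0], bytes[1]); // 255 216
```

Also note the empty "_bodyArrayBuffer": ArrayBuffer [] in the logged response below, which suggests the binary body never arrived, so decoding would fail regardless.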
The call
const response = await fetch(images, {}, { isBinary: true })
returns:
Response {
"_bodyArrayBuffer": ArrayBuffer [],
"_bodyInit": ArrayBuffer [],
"headers": Headers {
"map": Object {
"cache-control": "public, max-age=0",
"connection": "keep-alive",
"date": "Wed, 15 Apr 2020 13:05:16 GMT",
"transfer-encoding": "chunked",
"x-content-type-options": "nosniff",
},
},
"ok": true,
"status": 200,
"statusText": undefined,
"type": "default",
"url": "http://127.0.0.1:19001/assets/assets/catsmall.png?platform=android&hash=6adcaa9fd19ee6c3ea627d21d327768f?platform=android&dev=true&minify=false&hot=false",
}
I am not getting any prediction and get an error like this.
@chandan7838 @feetbo90 please make new issues if you think you are experiencing a bug; this issue is about loading local models. Also @chandan7838, that response is the prediction, as a tensor. Please take a look at the guide and tutorials to familiarize yourself with tensors and how to use them. You can also look at the mobilenet model documentation to see different ways to use the model.
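To expand on that: a shape [1, 2] output holds two class scores, and after pulling the values out with the tensor's data()/dataSync() methods, picking the winning class is a plain argmax. A minimal sketch in plain JavaScript (the two-class label names here are hypothetical, not from the model above):

```javascript
// Given the raw values read out of a shape [1, 2] prediction tensor
// (e.g. via predictionTensor.dataSync()), pick the highest-scoring class.
function argmax(values) {
  let best = 0;
  for (let i = 1; i < values.length; i++) {
    if (values[i] > values[best]) best = i;
  }
  return best;
}

const scores = [0.13, 0.87];           // stand-in for dataSync() output
const labels = ['class_a', 'class_b']; // hypothetical label names
console.log(labels[argmax(scores)]);   // "class_b"
```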
Hi there! I have tried very hard, but I get a "Network request failed" error when I input my photo for TensorFlow to process. My code is:
import React, { useState, useEffect } from 'react';
import { View, Text, Image, TouchableOpacity, StyleSheet, Dimensions, Alert, ScrollView, StatusBar } from 'react-native';
// TF
import * as tf from '@tensorflow/tfjs'
import { fetch } from '@tensorflow/tfjs-react-native'
import * as mobilenet from '@tensorflow-models/mobilenet'
import * as jpeg from 'jpeg-js'
// Lib for image picker android/ios
import * as ImagePicker from 'expo-image-picker'
import { Ionicons, AntDesign } from '@expo/vector-icons';
// Expo Permission
import Constants from 'expo-constants'
import * as Permissions from 'expo-permissions'
const App = () => {
// State for holding TensorFlow is Ready:
const [tfReady, setTFReady] = useState(false)
// State for holding Model is Ready:
const [modelReady, setModelReady] = useState(false)
// To have any predictions:
const [allPredictions, setPredict] = useState(null)
// Image URI for putting and computing
// Picture for processing any issue and solve predictions:
const [img, setImg] = useState('');
const [likeModel, setLikeModel] = useState('');
// Create Component Did Mount (useEffect) with Hook API new method with me
// If tf.js is ready you will see "TF.js is ready"
// If Model is ready you will see "Model ready"
useEffect(()=>{
const waitForReadyTF = async() =>{
// Wait for reading Tensor
await tf.ready()
// Signal to the tfReady (State) for showing to user (TF is Ready)
setTFReady(true)
// Loading the MobileNet model
// What is MobileNet?
// MobileNet is a model architecture for
// mobile vision and image classification.
const model = await mobilenet.load()
setLikeModel(model)
setModelReady(true)
// Ask for Camera permission function
cameraPermission()
}
waitForReadyTF()
}, []) // empty dependency array: run once on mount, not on every render
// Allow user the camera permission to use our app
const cameraPermission = async() =>{
if(Constants.platform.android){
// askAsync resolves to an object; destructure status from it
const { status } = await Permissions.askAsync(Permissions.CAMERA_ROLL)
if(status !== 'granted'){
Alert.alert("Permission not detections!", `
Please allow to your Camera to work with my app!`);
}
}
}
// image to tensor function has a 1. args
// that import image and converting to array
// and ready to classify with tensorflow!
const imageToTensor = (rawImageData) =>{
// First convert the raw JPEG bytes
// to Uint8Array pixel data
const TO_UINT8ARRAY = true
const { width, height, data } = jpeg.decode(rawImageData, TO_UINT8ARRAY)
// Drop the alpha channel; MobileNet expects 3-channel RGB:
const buffer = new Uint8Array(width * height * 3)
var offset = 0
for (var i = 0; i < buffer.length; i += 3){
buffer[i] = data[offset]
buffer[i + 1] = data[offset + 1]
buffer[i + 2] = data[offset + 2] // note: i + 2, not i + 3
offset += 4
}
return tf.tensor3d(buffer, [height, width, 3])
}
const classifyImage = async (source = img) => {
try{
console.log("==================Classify image section==================")
console.log("This is the image uri: ")
console.log(source)
console.log("======================================================")
// Resolve the selected image to a loadable asset path
const imageAssetPath = Image.resolveAssetSource(source)
// Fetch the image as binary data (fetch from @tensorflow/tfjs-react-native)
const response = await fetch(imageAssetPath.uri, {}, {isBinary: true})
const rawImageData = await response.arrayBuffer()
const imageTensor = imageToTensor(rawImageData)
const predictions = await likeModel.classify(imageTensor)
setPredict(predictions)
// Log the fresh value; the allPredictions state updates asynchronously
console.log(predictions)
}catch(err){
console.log(err)
}
}
// Selecting Img from Camera Roll
const selectImage = async () => {
try {
const response = await ImagePicker.launchImageLibraryAsync({
mediaTypes: ImagePicker.MediaTypeOptions.Images,
allowsEditing: true,
aspect: [4, 4]
});
if (!response.cancelled) {
// Create a JS object holding the selected image uri
const source = { uri: response.uri }
setImg(source);
// Pass the picked image directly: the img state update is
// asynchronous, so classifyImage would otherwise read a stale value
classifyImage(source)
}
} catch (err) {
alert(err)
}
}
return (
<View style={styles.container}>
<StatusBar barStyle="light-content"/>
{/* If tf ready is really ready you can show on app TF status */}
{tfReady?
<Text style={styles.tfReadyStatusConnection}>TF is Ready!</Text>
: <Text style={styles.tfReadyStatusDontConnection}>Connecting...</Text>}
<Text style={styles.welcomeTxt}>Classify your image</Text>
<View style={styles.sectionB}>
{img === '' ? <View style={styles.imgPlace}>
<AntDesign style={{marginTop: 70}} name="picture" size={120} color="white" />
</View>
: <Image
style={styles.imgPlace}
source={img} />}
<TouchableOpacity
onPress={() => {
selectImage()
}}
onLongPress={()=>{
setImg('');
}}
style={styles.btnItems}>
<Ionicons
name="md-add"
style={styles.addButton}
size={30}
color="#FFF" />
</TouchableOpacity>
<Text>Predictions: {allPredictions ? JSON.stringify(allPredictions) : ''}</Text>
</View>
</View>
)
}
const styles = StyleSheet.create({
container: {
flex: 1,
flexDirection: 'column',
backgroundColor: 'black'
},
welcomeTxt: {
fontSize: 20,
alignSelf: 'center',
marginTop: 80,
marginBottom: 30,
color: '#FFF'
},
sectionB: {
alignSelf: 'center',
alignItems: 'flex-end'
},
imgPlace: {
width: Dimensions.get('window').width - 100,
height: 300,
borderRadius: 10,
alignSelf: 'center',
alignItems: 'center'
},
btnItems: {
backgroundColor: '#3b416c',
width: 50,
height: 50,
borderRadius: 100,
marginTop: -25,
marginRight: -10
},
addButton: {
alignSelf: 'center',
marginTop: 10
},
tfReadyStatusConnection: {
fontSize: 10,
alignSelf: 'center',
textAlign: 'center',
justifyContent: 'center',
color: '#8f5',
marginTop: 50,
marginHorizontal: 50,
},
tfReadyStatusDontConnection: {
fontSize: 10,
alignSelf: 'flex-start',
textAlign: 'left',
justifyContent: 'center',
color: '#d03',
marginTop: 50
},
});
export default App;
Tap on the following link to see my app's error: https://drive.google.com/file/d/1RZiLR3KWaj2HFRY1Ss7_VopXEjH18GBb/view?usp=sharing
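The alpha-dropping loop in the imageToTensor function above is easy to get wrong: the blue channel must land at buffer[i + 2], not buffer[i + 3]. A self-contained sketch of that RGBA-to-RGB conversion:

```javascript
// Convert RGBA pixel data (4 bytes per pixel) to RGB (3 bytes per pixel),
// dropping the alpha channel, since MobileNet expects 3-channel input.
function rgbaToRgb(data, width, height) {
  const buffer = new Uint8Array(width * height * 3);
  let offset = 0; // index into the RGBA source
  for (let i = 0; i < buffer.length; i += 3) {
    buffer[i] = data[offset];         // R
    buffer[i + 1] = data[offset + 1]; // G
    buffer[i + 2] = data[offset + 2]; // B (i + 2, not i + 3)
    offset += 4;                      // skip A
  }
  return buffer;
}

// One red pixel and one green pixel, fully opaque:
const rgba = Uint8Array.from([255, 0, 0, 255, 0, 255, 0, 255]);
console.log(Array.from(rgbaToRgb(rgba, 2, 1))); // [255, 0, 0, 0, 255, 0]
```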
I am facing a similar issue; my problem is that I have multiple converted .bin files.
Does anyone know how to pass multiple model weight files to bundleResourceIO and use loadLayersModel with multiple .bin files?
> Hi there! I tried so very much, but i have "Network request failed" error when i input my photo and i want to tensorflow that process! my code is: [full comment and code quoted from above]
Your issue is probably caused by https://github.com/tensorflow/tfjs/issues/3186. If you still have the problem after that, you can clone my working example from https://github.com/dipeshkoirala21/rn-tensorflowjs-mobilenet
@alihassan711 @tafsiri Hello, I am fairly new to TensorFlow and I have some questions; please help me here. How did you convert your pretrained models? Did you use tensorflowjs_converter? If not, what did you use? I am trying to convert a pretrained model, which is generally in .pb format (downloaded from here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md ), and I have seen many examples where model.json and .bin files are used. Can somebody provide a link to a tutorial, or some tips, for converting models into .bin files?
@co2nut tensorflowjs_converter --input_format keras path/to/my_model.h5 path/to/my_output --weight_shard_size_bytes 419430400
The weight_shard_size_bytes flag sets the maximum size of each .bin shard. By increasing it enough, you get a single .bin file. The value above is enough if you would otherwise have fewer than 100 shards. That said, a single big shard has its own drawbacks.
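As a sanity check on that flag value: 419430400 bytes is 400 MiB per shard, and the converter emits one shard per started shard-size chunk of the total weight bytes, so any model with under 400 MiB of weights ends up in a single .bin. The model sizes below are hypothetical examples:

```javascript
// Number of .bin shards produced for a given --weight_shard_size_bytes value:
// one shard per started chunk of the total weight bytes.
function shardCount(totalWeightBytes, shardSizeBytes) {
  return Math.ceil(totalWeightBytes / shardSizeBytes);
}

const shardSize = 419430400; // 400 * 1024 * 1024 bytes = 400 MiB
console.log(shardCount(16 * 1024 * 1024, shardSize)); // 1 (a ~16 MiB model fits in one shard)
console.log(shardCount(16 * 1024 * 1024, 4194304));   // 4 (with 4 MiB shards)
```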
@dipeshkoirala21 Official doc here: https://www.tensorflow.org/js/tutorials/conversion/import_keras
i'm having problem on this line: const modelWeights = require('./assets/model/wt.bin');
@gazier857 https://stackoverflow.com/questions/60715615/react-native-importing-error-unable-to-resolve-module-with-bin-how-to-import Please post these kinds of issues on Stack Overflow rather than GitHub :-)
I followed this tutorial and the official documentation. It was helpful and works perfectly with hosted TensorFlow models, but when I try to load a local model in componentDidMount it says:
ERROR 15:43 Unable to resolve "../assets/model.json" from "App.js"
ERROR 15:43 Building JavaScript bundle: error
Even if I change the path, it again throws an error with the updated path.