tensorflow / tfjs

A WebGL accelerated JavaScript library for training and deploying ML models.
https://js.tensorflow.org
Apache License 2.0

Not able to load local models with TensorFlow.js for React Native (code attached below) #3070

Closed alihassan711 closed 4 years ago

alihassan711 commented 4 years ago

I followed this tutorial and the official documentation. It was amazing and helpful, and it works perfectly with the TensorFlow models, but when I try to load a local model in componentDidMount I get this error:

ERROR 15:43 Unable to resolve "../assets/model.json" from "App.js"
ERROR 15:43 Building JavaScript bundle: error

Even if I change the path like this:

const modelJson = require('./assets/model.json');
const modelWeights = require('./assets/det.bin');

It throws the same error with the updated path.

import React from 'react'
import {
  StyleSheet,
  Text,
  View,
  ActivityIndicator,
  StatusBar,
  Image,
  TouchableOpacity
} from 'react-native'
import * as tf from '@tensorflow/tfjs'
import { fetch, bundleResourceIO } from '@tensorflow/tfjs-react-native' // bundleResourceIO is used in componentDidMount below
import * as mobilenet from '@tensorflow-models/mobilenet'
import * as jpeg from 'jpeg-js'
import * as ImagePicker from 'expo-image-picker'
import Constants from 'expo-constants'
import * as Permissions from 'expo-permissions'

class App extends React.Component {
  state = {
    isTfReady: false,
    isModelReady: false,
    predictions: null,
    image: null
  }

  async componentDidMount() {
    await tf.ready()
    this.setState({
      isTfReady: true
    })

    // Get reference to bundled model assets
    const modelJson = require('../assets/model.json');
    const modelWeights = require('../assets/det.bin');
    this.model = await tf.loadLayersModel(
      bundleResourceIO(modelJson, modelWeights));

    //this.model = await mobilenet.load()
    this.setState({ isModelReady: true })
    this.getPermissionAsync()
  }

  getPermissionAsync = async () => {
    if (Constants.platform.ios) {
      const { status } = await Permissions.askAsync(Permissions.CAMERA_ROLL)
      if (status !== 'granted') {
        alert('Sorry, we need camera roll permissions to make this work!')
      }
    }
  }

  imageToTensor(rawImageData) {
    const TO_UINT8ARRAY = true
    const { width, height, data } = jpeg.decode(rawImageData, TO_UINT8ARRAY)
    // Drop the alpha channel info for mobilenet
    const buffer = new Uint8Array(width * height * 3)
    let offset = 0 // offset into original data
    for (let i = 0; i < buffer.length; i += 3) {
      buffer[i] = data[offset]
      buffer[i + 1] = data[offset + 1]
      buffer[i + 2] = data[offset + 2]

      offset += 4
    }

    return tf.tensor3d(buffer, [height, width, 3])
  }

  classifyImage = async () => {
    try {
      const imageAssetPath = Image.resolveAssetSource(this.state.image)
      const response = await fetch(imageAssetPath.uri, {}, { isBinary: true })
      const rawImageData = await response.arrayBuffer()
      const imageTensor = this.imageToTensor(rawImageData)
      const predictions = await this.model.classify(imageTensor)
      this.setState({ predictions })
      console.log(predictions)
    } catch (error) {
      console.log(error)
    }
  }

  selectImage = async () => {
    try {
      let response = await ImagePicker.launchImageLibraryAsync({
        mediaTypes: ImagePicker.MediaTypeOptions.All,
        allowsEditing: true,
        aspect: [4, 3]
      })

      if (!response.cancelled) {
        const source = { uri: response.uri }
        this.setState({ image: source })
        this.classifyImage()
      }
    } catch (error) {
      console.log(error)
    }
  }

  renderPrediction = prediction => {
    return (
      <Text key={prediction.className} style={styles.text}>
        {prediction.className}
      </Text>
    )
  }

  render() {
    const { isTfReady, isModelReady, predictions, image } = this.state

    return (
      <View style={styles.container}>
        <StatusBar barStyle='light-content' />
        <View style={styles.loadingContainer}>
          <Text style={styles.text}>
            TFJS ready? {isTfReady ? <Text>✅</Text> : ''}
          </Text>

          <View style={styles.loadingModelContainer}>
            <Text style={styles.text}>Model ready? </Text>
            {isModelReady ? (
              <Text style={styles.text}>✅</Text>
            ) : (
              <ActivityIndicator size='small' />
            )}
          </View>
        </View>
        <TouchableOpacity
          style={styles.imageWrapper}
          onPress={isModelReady ? this.selectImage : undefined}>
          {image && <Image source={image} style={styles.imageContainer} />}

          {isModelReady && !image && (
            <Text style={styles.transparentText}>Tap to choose image</Text>
          )}
        </TouchableOpacity>
        <View style={styles.predictionWrapper}>
          {isModelReady && image && (
            <Text style={styles.text}>
              Predictions: {predictions ? '' : 'Predicting...'}
            </Text>
          )}
          {isModelReady &&
            predictions &&
            predictions.map(p => this.renderPrediction(p))}
        </View>
        <View style={styles.footer}>
          <Text style={styles.poweredBy}>Powered by:</Text>
          <Image source={require('./assets/tfjs.jpg')} style={styles.tfLogo} />
        </View>
      </View>
    )
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#171f24',
    alignItems: 'center'
  },
  loadingContainer: {
    marginTop: 80,
    justifyContent: 'center'
  },
  text: {
    color: '#ffffff',
    fontSize: 16
  },
  loadingModelContainer: {
    flexDirection: 'row',
    marginTop: 10
  },
  imageWrapper: {
    width: 280,
    height: 280,
    padding: 10,
    borderColor: '#cf667f',
    borderWidth: 5,
    borderStyle: 'dashed',
    marginTop: 40,
    marginBottom: 10,
    position: 'relative',
    justifyContent: 'center',
    alignItems: 'center'
  },
  imageContainer: {
    width: 250,
    height: 250,
    position: 'absolute',
    top: 10,
    left: 10,
    bottom: 10,
    right: 10
  },
  predictionWrapper: {
    height: 100,
    width: '100%',
    flexDirection: 'column',
    alignItems: 'center'
  },
  transparentText: {
    color: '#ffffff',
    opacity: 0.7
  },
  footer: {
    marginTop: 40
  },
  poweredBy: {
    fontSize: 20,
    color: '#e69e34',
    marginBottom: 6
  },
  tfLogo: {
    width: 125,
    height: 70
  }
})

export default App
tafsiri commented 4 years ago

Where in your project folder are those two files located? What is in your metro.config.js file? Are you able to load any JSON file even if you do not load any of the tensorflow libraries?

Your issue may be a more general react native one than a tensorflow.js specific one. If you make a separate app without tensorflow.js can you load local json files?

alihassan711 commented 4 years ago
[Screenshot of the project folder structure]

It's an Expo project, so there is no metro config. Both files are in the assets folder. I have not tried loading a local JSON file in any other project.

tafsiri commented 4 years ago

I don't think you can load local assets in a hosted Expo project. I would check the Expo docs to see if there is any way to load local assets (independent of tfjs). Otherwise you might need to eject your app.

alihassan711 commented 4 years ago

To me, it looks like the problem is with reading the .bin file. I have tried using Expo assets as well:

import { Asset } from 'expo-asset';
import { bundleResourceIO } from '@tensorflow/tfjs-react-native';

const modelJson = Asset.fromModule(require('./assets/model.json'));
const modelWeights = Asset.fromModule(require('./assets/det.bin'));
this.model = await tf.loadLayersModel(bundleResourceIO(modelJson, modelWeights));
tafsiri commented 4 years ago

So the json file is loading correctly? Did you configure metro to add .bin files to your assetExts?
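
For reference, the configuration the question refers to is the metro.config.js from the tfjs-react-native setup docs; a minimal sketch (assuming the metro-config API of that era) looks like this:

const { getDefaultConfig } = require('metro-config');

module.exports = (async () => {
  const defaultConfig = await getDefaultConfig();
  const { assetExts } = defaultConfig.resolver;
  return {
    resolver: {
      // Let metro resolve require('./assets/det.bin') by treating .bin as an asset
      assetExts: [...assetExts, 'bin'],
    },
  };
})();

With that in place, the plain require() results can be passed directly to bundleResourceIO, without wrapping them in Asset.fromModule.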

chandan7838 commented 4 years ago

I am getting this response:

{"dataId": {}, "dtype": "float32", "id": 17643, "isDisposedInternal": false, "kept": false, "rankType": "2", "scopeId": 9979, "shape": [1, 2], "size": 2, "strides": [2]}

after calling model.predict(imageTensor).

I am not getting any prediction.

feetbo90 commented 4 years ago

I am getting an error with the same issue.

I'm executing this code:

try {
  let model = await mobilenet.load();
  const images = Asset.fromModule(require('./assets/catsmall.png')).uri;
  // Get a reference to the bundled asset and convert it to a tensor
  // (note: `image` is not defined anywhere in this snippet)
  const imageAssetPath = Image.resolveAssetSource(image)
  const response = await fetch(images, {}, { isBinary: true })
  console.log(response)
  // let response = await fetch(imageAssetPath.uri, {}, { isBinary: true });
  let imageData = await response.arrayBuffer();
  console.log("image data : " + imageData)
  // Note: decodeJpeg is being applied to a .png asset here
  let imageTensor = decodeJpeg(imageData);
  console.log("image tensor : " + imageTensor)

  let prediction = await model.classify(imageTensor);
} catch (e) {
  console.log("error network : " + e)
}

The call

const response = await fetch(images, {}, { isBinary: true })

returns:
Response {
  "_bodyArrayBuffer": ArrayBuffer [],
  "_bodyInit": ArrayBuffer [],
  "headers": Headers {
    "map": Object {
      "cache-control": "public, max-age=0",
      "connection": "keep-alive",
      "date": "Wed, 15 Apr 2020 13:05:16 GMT",
      "transfer-encoding": "chunked",
      "x-content-type-options": "nosniff",
    },
  },
  "ok": true,
  "status": 200,
  "statusText": undefined,
  "type": "default",
  "url": "http://127.0.0.1:19001/assets/assets/catsmall.png?platform=android&hash=6adcaa9fd19ee6c3ea627d21d327768f?platform=android&dev=true&minify=false&hot=false",
}

I am not getting any prediction, and I get an error like the one in the attached image.

tafsiri commented 4 years ago

@chandan7838 @feetbo90 please make new issues if you think you are experiencing a bug; this issue is about loading local models. Also @chandan7838, that response is the prediction, as a tensor. Please take a look at the guide and tutorials to familiarize yourself with tensors and how to use them. You can also look at the mobilenet model documentation to see the different ways to use the model.
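
To make that concrete, here is a minimal sketch of pulling the scores out of the tensor that predict() returns (the [1, 2] shape comes from the response pasted above):

// predict() returns a tf.Tensor; data() resolves to a flat TypedArray of its values
const output = model.predict(imageTensor); // shape [1, 2], as in the response above
const scores = await output.data();        // Float32Array of length 2
console.log('class scores:', Array.from(scores));
output.dispose();                          // free the tensor once you are done with it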

Asncodes-80 commented 3 years ago

Hi there! I have tried very hard, but I get a "Network request failed" error when I pick a photo and try to process it with TensorFlow. My code is:

import React, { useState, useEffect } from 'react';
import { View, Text, Image, TouchableOpacity, StyleSheet, Dimensions, Alert, ScrollView, StatusBar } from 'react-native';
// TF
import * as tf from '@tensorflow/tfjs'
import { fetch } from '@tensorflow/tfjs-react-native'
import * as mobilenet from '@tensorflow-models/mobilenet'
import * as jpeg from 'jpeg-js'
// Lib for image picker android/ios
import * as ImagePicker from 'expo-image-picker'
import { Ionicons, AntDesign } from '@expo/vector-icons';
// Expo Permission 
import Constants from 'expo-constants'
import * as Permissions from 'expo-permissions'
const App = () => {
  // State for holding  TensorFlow is Ready:
  const [tfReady, setTFReady] = useState(false)
  // State for holding  Model is Ready:
  const [modelReady, setModelReady] = useState(false)
  // To have any predictions:
  const [allPredictions, setPredict] = useState(null)
  // Image URI for putting and computing
  // Picture for processing any issue and solve predictions:
  const [img, setImg] = useState('');

  const [likeModel, setLikeModel] = useState('');

  // Create Component Did Mount (useEffect) with Hook API new method with me
  // If tf.js is ready you will see "TF.js is ready"
  // If Model is ready you will see "Model ready"
  useEffect(()=>{
    const waitForReadyTF = async() =>{
      // Wait for reading Tensor
      await tf.ready()
      // Signal to the tfReady (State) for showing to user (TF is Ready)
      setTFReady(true)

      // Loading the model with MobileNet
      // What is MobileNet?
      // MobileNet is a model architecture
      // for mobile vision and image classification.
      const model = await mobilenet.load()
      setLikeModel(model)
      setModelReady(true)
      // Ask for Camera permission function
      cameraPermission()
    }
    waitForReadyTF()
  }, []) // empty deps: run once on mount; without this the effect re-runs on every render
  // Allow user the camera permission to use our app
  const cameraPermission = async() =>{
    if(Constants.platform.android){
      // askAsync resolves to an object, so destructure status from it
      const { status } = await Permissions.askAsync(Permissions.CAMERA_ROLL)
      if (status !== 'granted') {
        Alert.alert("Permission not detections!", `
          Please allow to your Camera to work with my app!`);
      }
    }
  }
  // image to tensor function has a 1. args 
  // that import image and converting to array 
  // and ready to classify with tensorflow!
  const imageToTensor = (rawImageData) =>{
    // First, decode the raw image into a Uint8Array
    const TO_UINT8ARRAY = true
    const { width, height, data } = jpeg.decode(rawImageData, TO_UINT8ARRAY)
    // Drop the alpha channel info for mobilenet:
    const buffer = new Uint8Array(width * height * 3)
    var offset = 0
    for (var i = 0; i < buffer.length; i+=3){
      buffer[i] = data[offset]
      buffer[i + 1] = data[offset + 1]
      buffer[i + 2] = data[offset + 2]

      offset += 4
    }
    return tf.tensor3d(buffer, [height, width, 3])
  }

  const classifyImage = async () => {
    try{
      console.log("==================Classify image section==================")
      console.log("This is image uri: ")
      console.log(img)
      console.log("======================================================")
      // Getting image that selected by user 
      // with imageAssetPath we specify this file is image and adding as assets
      const imageAssetPath = Image.resolveAssetSource(img)
      // Getting imageAssetPath as uri and converting to is Binary
      const response = await fetch(imageAssetPath.uri, {}, {isBinary: true})
      const rawImageData = await response.arrayBuffer()
      const imageTensor = imageToTensor(rawImageData)
      // The classify call below is where the error appears
      const predictions = await likeModel.classify(imageTensor)
      setPredict(predictions)
      console.log(allPredictions)
    }catch(err){
      console.log(err)
    }
  }

  // Selecting Img from Camera Roll
  const selectImage = async () => {
    try {
        const response = await ImagePicker.launchImageLibraryAsync({
          mediaTypes: ImagePicker.MediaTypeOptions.Images,
          allowsEditing: true,
          aspect: [4, 4]
        });
      if (!response.cancelled) {
        // Create JS Object to select Image from img state 
        const source = { uri: response.uri }
          setImg(source);
          // Lets goo to classify your image
          classifyImage()
      }
    } catch (err) {
      alert(err)
    }
  }
  return (
    <View style={styles.container}>
    <StatusBar barStyle="light-content"/>
     {/* If tf ready is really ready you can show on app TF status  */}
      {tfReady? 
        <Text style={styles.tfReadyStatusConnection}>TF is Ready!</Text> 
        : <Text style={styles.tfReadyStatusDontConnection}>Connecting...</Text>}
      <Text style={styles.welcomeTxt}>Classify your image</Text>

      <View style={styles.sectionB}>
        {img === '' ? <View style={styles.imgPlace}>
          <AntDesign style={{marginTop: 70}} name="picture" size={120} color="white" />
        </View> 
          : <Image
            style={styles.imgPlace}
            source={img} />}

        <TouchableOpacity
          onPress={() => {
            selectImage()
          }}
          onLongPress={()=>{
            setImg('');
          }}
          style={styles.btnItems}>
          <Ionicons
            name="md-add"
            style={styles.addButton}
            size={30}
            color="#FFF" />
        </TouchableOpacity>

        {/* classify() returns an array of objects, which cannot be rendered directly */}
        <Text style={{ color: '#FFF' }}>
          Predictions: {allPredictions ? allPredictions.map(p => p.className).join(', ') : ''}
        </Text>
      </View>
    </View>
  )
}
const styles = StyleSheet.create({
  container: {
    flex: 1,
    flexDirection: 'column',
    backgroundColor: 'black'
  },
  welcomeTxt: {
    fontSize: 20,
    alignSelf: 'center',
    marginTop: 80,
    marginBottom: 30,
    color: '#FFF'
  },
  sectionB: {
    alignSelf: 'center',
    alignItems: 'flex-end'
  },
  imgPlace: {
    width: Dimensions.get('window').width - 100,
    height: 300,
    borderRadius: 10,
    alignSelf: 'center',
    alignItems: 'center'
  },
  btnItems: {
    backgroundColor: '#3b416c',
    width: 50,
    height: 50,
    borderRadius: 100,
    marginTop: -25,
    marginRight: -10
  },
  addButton: {
    alignSelf: 'center',
    marginTop: 10
  },
  tfReadyStatusConnection: {
    fontSize: 10,
    alignSelf: 'center',
    textAlign: 'center',
    justifyContent: 'center',
    color: '#8f5',
    marginTop: 50,
    marginHorizontal: 50,
  },
  tfReadyStatusDontConnection: {
    fontSize: 10,
    alignSelf: 'flex-start',
    textAlign: 'left',
    justifyContent: 'center',
    color: '#d03',
    marginTop: 50
  },
});
export default App;

Tap the following link to see my app's error: https://drive.google.com/file/d/1RZiLR3KWaj2HFRY1Ss7_VopXEjH18GBb/view?usp=sharing

co2nut commented 3 years ago

I am facing a similar issue; in my case the conversion produced multiple .bin files.

Does anyone know how to pass multiple model weights to bundleResourceIO and use loadLayersModel with several .bin files?
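
If your version of @tensorflow/tfjs-react-native supports it (later releases do), bundleResourceIO accepts an array of weight-file modules for sharded models. A sketch, with the shard file names assumed:

import * as tf from '@tensorflow/tfjs';
import { bundleResourceIO } from '@tensorflow/tfjs-react-native';

const modelJson = require('./assets/model.json');
// Pass every weight shard, in order (file names here are assumptions):
const modelWeights = [
  require('./assets/group1-shard1of2.bin'),
  require('./assets/group1-shard2of2.bin'),
];
// Inside an async function:
const model = await tf.loadLayersModel(bundleResourceIO(modelJson, modelWeights));

Alternatively, the converter flag mentioned further down can merge the weights into a single .bin file.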

dipeshkoirala21 commented 3 years ago

[Quotes @Asncodes-80's comment above in full.]

Your issue is probably caused by https://github.com/tensorflow/tfjs/issues/3186. If you still have a problem after checking that, you can clone my working example from here: https://github.com/dipeshkoirala21/rn-tensorflowjs-mobilenet

dipeshkoirala21 commented 3 years ago

@alihassan711 @tafsiri Hello, I am fairly new to TensorFlow and have some questions; please help me out. How did you convert your pretrained models? Did you use tensorflowjs_converter? If not, what did you use? I am trying to convert a pretrained model, which is generally in .pb format (downloaded from here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md), and I have seen many examples where model.json and .bin files are used. Can somebody provide a link to a tutorial, or some tips, for converting models into .bin files?

benoitkoenig commented 3 years ago

@co2nut:

tensorflowjs_converter --input_format keras path/to/my_model.h5 path/to/my_output --weight_shard_size_bytes 419430400

The weight_shard_size_bytes flag raises the maximum size of each .bin shard; by increasing it enough, you will get a single .bin file. The value above is enough if you have fewer than 100 shards of the default size. That said, having a single big shard has its own drawbacks.

@dipeshkoirala21 Official doc here: https://www.tensorflow.org/js/tutorials/conversion/import_keras
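
For the .pb case @dipeshkoirala21 asked about, the converter call differs by input format. A sketch, assuming an older converter release that still accepts frozen graphs via tf_frozen_model (newer releases expect a SavedModel via --input_format=tf_saved_model) and the standard MobilenetV1 output node (adjust the node name to your model):

tensorflowjs_converter \
  --input_format=tf_frozen_model \
  --output_node_names='MobilenetV1/Predictions/Reshape_1' \
  path/to/frozen_graph.pb \
  path/to/web_model

This writes a model.json plus one or more .bin weight shards into path/to/web_model. Note that a model converted this way is a graph model and is loaded with tf.loadGraphModel, not tf.loadLayersModel.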

gazier857 commented 3 years ago

I'm having a problem on this line:

const modelWeights = require('./assets/model/wt.bin');

benoitkoenig commented 3 years ago

@gazier857 See https://stackoverflow.com/questions/60715615/react-native-importing-error-unable-to-resolve-module-with-bin-how-to-import. Please post this kind of issue on Stack Overflow rather than GitHub :-)