aws-amplify / aws-sdk-ios

AWS SDK for iOS. For more information, see our web site:
https://aws-amplify.github.io/docs

How to do eyeglasses detection for an image in iOS #4005

Closed daver234 closed 2 years ago

daver234 commented 2 years ago

State your question: Do you have an example of how to do eyeglasses detection (and get the confidence level) for an image in iOS using Swift?

It seems that I have to pass in an image to detectFaces. Then in the result, if I ask for all the attributes, I can get a confidence level for glasses on the detected face. Can I get a sample of how to do this?

Which AWS Services are you utilizing? iOS SDK (Swift) with AWS Rekognition

Provide code snippets (if applicable)

```swift
mutating func sendImageToRekognition(glassesImageData: Data) {
    rekognitionObject = AWSRekognition.default()

    let faceToDetect = AWSRekognitionImage()
    faceToDetect?.bytes = glassesImageData

    let detectFaceAWS = AWSRekognitionDetectFacesRequest()
    detectFaceAWS?.image = faceToDetect

    rekognitionObject.detectFaces(detectFaceAWS!) { (result, error) in
        if error != nil {
            print(error!)
            return
        }
        guard let resultSafe = result else {
            return
        }
        resultSafe.faceDetails?.forEach {
            print("## in foreach loop; how do I get the eye glasses confidence level?")
            print($0)
        }
    }
}
```
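For reference, a minimal sketch of how the eyeglasses confidence might be read from the `detectFaces` response once the request asks for all facial attributes. The `attributes = ["ALL"]` setting and the `eyeglasses` property on `AWSRekognitionFaceDetail` are assumptions about the AWSRekognition SDK surface (not confirmed in this thread), so check against the generated headers; `rekognitionObject` and `faceToDetect` are the same values as in the snippet above.

```swift
// Sketch only: request all face attributes so Rekognition includes the
// eyeglasses attribute (boolean value + confidence) in each face detail.
let request = AWSRekognitionDetectFacesRequest()
request?.image = faceToDetect
request?.attributes = ["ALL"]   // assumed: "ALL" returns the full attribute set

rekognitionObject.detectFaces(request!) { (result, error) in
    if let error = error {
        print(error)
        return
    }
    result?.faceDetails?.forEach { faceDetail in
        // assumed: faceDetail.eyeglasses carries both a value and a confidence score
        if let eyeglasses = faceDetail.eyeglasses {
            print("Wearing glasses: \(String(describing: eyeglasses.value)), " +
                  "confidence: \(String(describing: eyeglasses.confidence))")
        }
    }
}
```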

If you need help understanding how to implement something in particular, we suggest that you first look at our developer guide. You can also simplify the process of creating an application, as well as the associated backend setup, by using the Amplify CLI.

thisisabhash commented 2 years ago

Hello,

You can use the Amplify Predictions API to use machine learning in your application. For reference: https://docs.amplify.aws/lib/predictions/getting-started/q/platform/ios/

Specifically, you can detect objects in an image, along with the confidence of the detected labels. See https://docs.amplify.aws/lib/predictions/label-image/q/platform/ios/#set-up-your-backend

```swift
func detectLabels(_ image: URL) {
    // For offline calls only to Core ML models replace `options` in the call below with this instance:
    // let options = PredictionsIdentifyRequest.Options(defaultNetworkPolicy: .offline, pluginOptions: nil)
    Amplify.Predictions.identify(type: .detectLabels(.labels), image: image) { event in
        switch event {
        case let .success(result):
            let data = result as! IdentifyLabelsResult
            for label in data.labels {
                print("Label name: \(label.name) Confidence: \(String(describing: label.metadata?.confidence))")
            }
            // print(data.labels)
            // Use the labels in your app as you like or display them
        case let .failure(error):
            print(error)
        }
    }
}
```

I tried it with a sample image URL with eyeglasses, and the results in the console logs looked like the screenshot below.

[Screenshot: console output listing detected labels with confidence values]
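As a follow-up (not part of the original answer), one way to pull an eyewear confidence out of the `IdentifyLabelsResult` would be to filter the returned labels by name. The label strings below ("Glasses", "Sunglasses", "Eyewear") are assumptions; the actual names and the confidence type depend on what Rekognition returns for a given image.

```swift
// Sketch: return the confidence of the first eyewear-related label, if any.
// The label names in this set are assumed, not taken from the thread.
func eyewearConfidence(from result: IdentifyLabelsResult) -> Double? {
    let eyewearNames: Set<String> = ["Glasses", "Sunglasses", "Eyewear"]
    return result.labels
        .first { eyewearNames.contains($0.name) }?
        .metadata?
        .confidence
}
```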