[ { "faceId": "611374b3-8153-409e-9721-1069fd922f35", "faceRectangle": { "top": 367, "left": 236, "width": 410, "height": 410 }, "faceAttributes": { "facialHair": { "moustache": 0.4, "beard": 0.4, "sideburns": 0.1 }, "emotion": { "anger": 0.0, "contempt": 0.0, "disgust": 0.0, "fear": 0.0, "happiness": 1.0, "neutral": 0.0, "sadness": 0.0, "surprise": 0.0 } } } ]
Week 2 Step 2 ⬤⬤◯◯◯◯◯◯ | 🕐 Estimated completion: 5-20 minutes
✅ Task:

- Create an Azure Function that takes in an image from an HTTP request, parses it with `parse-multipart`, and returns the base64 code of the image
- Create a new branch called `emotionalgifs` in your repository
- Create the `emotionalgifs` function with the HTTP Trigger template, install and use `parse-multipart` to parse an image from an HTTP request, and save its raw data in a variable
- Add the `EMOTIONAL_ENDPOINT` secret and commit your code to `emotionalgifs/index.js` on your `emotionalgifs` branch
- Create a pull request from `emotionalgifs` to `main`, but do not merge it

🚧 Test your Work

Use Postman! Paste the function URL and make a POST request. Remember to attach the file in **Body**! In the output box, you should get the output. Make sure you're using an image with a real face in it or else it won't work. Here's an example of an output I get with this image:
HTTP trigger function that parses images

An HTTP multipart request is an HTTP request that HTTP clients construct to send files and data over to an HTTP server.
💡 This is commonly used by browsers and HTTP clients to upload files to the server.
Because we want to send an image file through an HTTP request, we need a piece of software to parse the raw data and extract the image file. Here comes the handy NPM package: `parse-multipart`!
A raw multipart/form-data payload is what an HTTP client sends via form submission, and it can contain several files at once. We need to extract all the files contained inside it. The multipart format allows you to send more than one file in the same payload, which is why it is called multipart.
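Here is an illustrative sketch of what such a payload looks like on the wire (the boundary string, field names, and file names are made up for this example):

```
POST /api/emotionalgifs HTTP/1.1
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryDtbT5UpPj83kllfw

------WebKitFormBoundaryDtbT5UpPj83kllfw
Content-Disposition: form-data; name="uploads[]"; filename="happy.jpg"
Content-Type: image/jpeg

...binary bytes of happy.jpg...
------WebKitFormBoundaryDtbT5UpPj83kllfw
Content-Disposition: form-data; name="uploads[]"; filename="surprised.png"
Content-Type: image/png

...binary bytes of surprised.png...
------WebKitFormBoundaryDtbT5UpPj83kllfw--
```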
When the POST request is made to this function with an image, we need to:

1. Grab the boundary from the request's `content-type` header
2. Parse the raw body with `parse-multipart` to extract the file
3. Return the base64 code of the image
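Here is a minimal sketch of what `emotionalgifs/index.js` could look like at this stage. It assumes the image is sent as multipart form data (as in the Postman test above); treat it as an illustration rather than the only valid solution:

```javascript
// emotionalgifs/index.js -- illustrative sketch
var multipart = require('parse-multipart');

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    // 1. Grab the boundary from the request's content-type header
    var boundary = multipart.getBoundary(req.headers['content-type']);

    // 2. Parse the raw body into an array of { filename, type, data } parts
    var parts = multipart.Parse(Buffer.from(req.body), boundary);

    // 3. Return the base64 code of the image (we only sent one file, so it's at index 0)
    context.res = {
        body: {
            name: parts[0].filename,
            type: parts[0].type,
            data: parts[0].data.toString('base64')
        }
    };
    context.done();
};
```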
Week 2 Step 3 ⬤⬤⬤◯◯◯◯◯ | 🕐 Estimated completion: 10-30 minutes
Remember back in Step 1 we used Postman to call the API? Well, we are going to call the Face API with our Azure Function!
✅ Task:

- Create an Azure Function that sends a static image to the Face API and returns emotion data
- Install and use `node-fetch` to call the Face API with the image, and send back the emotion data in the body in JSON format: `body: {the_data}`
- Add the `EMOTIONAL_ENDPOINT` secret and commit your code to `emotionalgifs/index.js` on the `emotionalgifs` branch

🚧 Test your Work

Use Postman! Paste the function URL and make a POST request. Remember to attach the file in **Body**! In the output box, you should get the output. Make sure you're using an image with a real face in it or else it won't work. Here's an example of an output I get with this image:
This function takes in one parameter, `img`, that contains the image we're trying to analyze. Inside, we have two variables involved in the call: `subscriptionKey` and `uriBase`.
The `process.env` object allows you to access super-secret values in your backend. This prevents hackers from getting your keys and doing bad stuff (like exploiting your resources).
```javascript
async function analyzeImage(img) {
    const subscriptionKey = process.env.SUBSCRIPTIONKEY;
    const uriBase = process.env.ENDPOINT + '/face/v1.0/detect';
}
```
Follow this Documentation to save your keys in your Function App.
Create a new function `analyzeImage(img)` outside of `module.exports` that will handle analyzing the image. Keep in mind this function is async because we will be using the `await` keyword with the API call.
Get your Face API keys ready! We need to let our Face API know that we are authenticated to access this resource.
In the same way we installed `parse-multipart`, install `node-fetch`.
Read the API section of the documentation. We're going to make a call using the `fetch(url, {options})` function.
To call the `fetch` function, use the `await` keyword, which we need because `fetch` returns a Promise: a proxy for a value that isn't currently known. You can read about JavaScript promises here.
API documentation can be tricky sometimes... Here's something to help:
:bulb: Request Headers tell the receiving end of the request what type of data is in the body.
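Putting the hints together, here is one possible shape of the completed `analyzeImage` function. It assumes the `SUBSCRIPTIONKEY` and `ENDPOINT` values saved in your Function App settings and only asks for the emotion attribute; treat it as a sketch, not the one required answer:

```javascript
const fetch = require('node-fetch');

async function analyzeImage(img) {
    const subscriptionKey = process.env.SUBSCRIPTIONKEY;
    const uriBase = process.env.ENDPOINT + '/face/v1.0/detect';

    // We only care about emotion data
    const params = new URLSearchParams({
        returnFaceAttributes: 'emotion'
    });

    // POST the binary image; the headers tell the Face API what we're sending
    // and prove we're allowed to use the resource
    const response = await fetch(uriBase + '?' + params.toString(), {
        method: 'POST',
        body: img,
        headers: {
            'Content-Type': 'application/octet-stream',
            'Ocp-Apim-Subscription-Key': subscriptionKey
        }
    });

    return await response.json();
}
```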
Call the `analyzeImage` function in `module.exports`. Add the code below into `module.exports`.
Remember that `parts` represents the parsed multipart form data. It is an array of parts, each one described by a filename, a type, and its data. Since we only sent one file, it is stored at index 0, and we want the `data` property to access the binary file, hence `parts[0].data`. Then, in the HTTP response of our Azure function, we store the result of the API call.
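For instance, if you uploaded a single JPEG and added `context.log(parts)` to the function, the logged output would look roughly like this (filename made up for illustration):

```
[
  {
    filename: 'happy.jpg',
    type: 'image/jpeg',
    data: <Buffer ff d8 ff e0 00 10 4a 46 49 46 ... >
  }
]
```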
```javascript
//module.exports function
//analyze the image
var result = await analyzeImage(parts[0].data);
context.res = {
    body: {
        result
    }
};
console.log(result);
context.done();
```
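If you want to check how the pieces fit together, `emotionalgifs/index.js` might now look roughly like this. It combines the parsing code from Step 2 with the new `analyzeImage` call sketched above; exact variable names and logging are up to you:

```javascript
var multipart = require('parse-multipart');
const fetch = require('node-fetch');

module.exports = async function (context, req) {
    // Parse the incoming multipart request (from Step 2)
    var boundary = multipart.getBoundary(req.headers['content-type']);
    var parts = multipart.Parse(Buffer.from(req.body), boundary);

    // Analyze the image and return the emotion data
    var result = await analyzeImage(parts[0].data);
    context.res = {
        body: {
            result
        }
    };
    console.log(result);
    context.done();
};

async function analyzeImage(img) {
    // ...the fetch call sketched earlier, using process.env.SUBSCRIPTIONKEY
    // and process.env.ENDPOINT + '/face/v1.0/detect'
}
```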
Pat yourself on the back.
Week 2 Step 1 ⬤◯◯◯◯◯◯◯ | 🕐 Estimated completion: 5-20 minutes
Getting Emotional ~ With the Face API
✅ Task:
- Create a request in Postman to send an image of a person to the Azure Face API to return the subject's emotions
- Add the `API_ENDPOINT` and `SUBSCRIPTION_KEY` secrets to your repository
🚧 Test your Work
You should get the following expected output if you have configured your Face API correctly, as well as sent the request with the correct parameters and body.
:white_check_mark: Expected Output
```json { "result": [ { "faceId": "d25465d6-0c38-4417-8466-cabdd908e756", "faceRectangle": { "top": 313, "left": 210, "width": 594, "height": 594 }, "faceAttributes": { "emotion": { "anger": 0, "contempt": 0, "disgust": 0, "fear": 0, "happiness": 1, "neutral": 0, "sadness": 0, "surprise": 0 } } } ] } ```1. The Face API
The Face API will accept the image and return information about the face, specifically emotions. Watch this video on Microsoft Cognitive Services for an in-depth explanation: http://www.youtube.com/watch?v=2aA8OEZ1wk8
❓ How do I create and access the Face API?
1. Log into your Azure portal
2. Navigate to **Create a Resource**, the **AI + Machine Learning** tab on the left, and finally select **Face** and fill out the necessary information
3. Record and save the API endpoint and [subscription key](https://docs.microsoft.com/en-us/azure/api-management/api-management-subscriptions)
4. Place the API endpoint and subscription key in the GitHub repository secrets: `API_ENDPOINT` and `SUBSCRIPTION_KEY`
   * These keys will be used in the Azure function to give access to this API

❓ Where can I find the Face API keys?
1. Navigate to the home page on the Microsoft Azure portal (https://portal.azure.com/#home)
2. Click on the resource you need the keys for
3. On the left menu bar, locate the Resource Management section and click on "Keys and Endpoint"

2. Using Postman to Send a Request
Now, we can test if our API is working using Postman. Make sure to pay close attention to the documentation and the API Reference.
Request URL
Request URL is used when a web client makes a request to a server for a resource. Notice that the request url listed in the API reference is this:
`https://{endpoint}/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]`
Parameters
Parameters are typically used in requests to APIs to specify settings or customize what YOU want to receive.
❓ What are the parameters for the request?
The Request URL has the following parameters in [ ]:

* [?returnFaceId]
* [&returnFaceLandmarks]
* [&returnFaceAttributes]
* [&recognitionModel]
* [&returnRecognitionModel]
* [&detectionModel]

Important things to note:

- All of the bracketed sections represent possible request parameters
- Read through the **Request Parameters** section carefully
- How can we specify that we want to get the emotion data?
- All of the parameters are **optional**
  - We can delete the parameters we don't need in our request
- Your **request URL** only requires one parameter, with a specific value
- Between `detect` and your parameter, add `?`
  - *If you had more than one parameter,* you would need to place `&` between each (but not between `detect` and your first parameter)
  - Since we only have one parameter, no `&` are needed

:bulb: [**All of this is located in the documentation! Find this section to read more:**](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)

![image](https://user-images.githubusercontent.com/69332964/119398425-8613c380-bca5-11eb-9cb3-575b6b0e3ee7.png)
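For example, a detect URL with two hypothetical parameters (`paramA` and `paramB` are placeholders, not real Face API parameters) would look like this:

```
https://{endpoint}/face/v1.0/detect?paramA=value1&paramB=value2
```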
Request Headers

Request Headers tell the receiving end of the request what type of data is in the body.
❓ How do I specify Request Headers?
- Go back to the Face API documentation [here](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), and find the **Request headers** section.
- The `Content-Type` header should be set to `application/octet-stream`. This specifies a binary file.
- The `Ocp-Apim-Subscription-Key` header should be set to one of your two keys from your Face API resource.
- Request headers are **not** part of the request URL. They are specified in the Postman headers tab:

Request Body
The body of a POST request contains the data you are sending.
❓ How do I send the image in the body of the POST request?
Go to the **body** tab of your Postman request and select **binary**:

Next, just upload the [image](https://images.unsplash.com/photo-1507003211169-0a1dd7228f2d?ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&ixlib=rb-1.2.1&auto=format&fit=crop&w=934&q=80) and send your POST request.