Week 2 Step 3 ⬤⬤⬤◯◯◯◯◯ | 🕐 Estimated completion: 10-30 minutes

Getting Emotional ~ With the Face API

Remember back in Step 1 when we used Postman to call the API? Well, now we're going to call the Face API from our Azure Function!
✅ Task:
Create an Azure Function that sends a static image to the Face API and returns the emotion data
[x] 1: Add Face API Keys and endpoint into your Function App's application settings.
[ ] 2: Edit your function to make a request to the Face API using node-fetch, and send back the emotion data in the body in JSON format: `body: {the_data}`
[ ] 3: Place your function URL in a repository secret called EMOTIONAL_ENDPOINT and commit your code to emotionalgifs/index.js on the emotionalgifs branch.
[ ] 4: Create a pull request to merge emotionalgifs into main, and only merge the pull request when the bot approves your changes!
🚧 Test your Work
Use Postman! Paste the function URL and make a POST request. Remember to attach the file in Body! Make sure you're using an image with a real face in it, or else it won't work. Here's an example of the output:
✅ Expected Output
The output should be in JSON format like so:
```json
{
  "result": [
    {
      "faceId": "a16f522d-0577-4e50-97cb-1feeef7aaf2e",
      "faceRectangle": {
        "top": 313,
        "left": 210,
        "width": 594,
        "height": 594
      },
      "faceAttributes": {
        "emotion": {
          "anger": 0,
          "contempt": 0,
          "disgust": 0,
          "fear": 0,
          "happiness": 1,
          "neutral": 0,
          "sadness": 0,
          "surprise": 0
        }
      }
    }
  ]
}
```
1. Define keys to authenticate the request
The analyzeImage function we're about to write takes in one parameter, img, that contains the image we're trying to analyze. Inside, we have two variables involved in the call: subscriptionKey and uriBase.
The process.env object allows you to access super-secret values in your backend. This prevents hackers from getting your keys and doing bad stuff (like exploiting your resources). Follow this Documentation to save your keys in your Function App's application settings.
💡 When you put the key and endpoint in Azure's application settings, they will only work once the function is deployed and running at its Azure Function URL. To use Postman and test this on your computer before deploying to Azure, you will need to hardcode subscriptionKey and uriBase like this:
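For example, a minimal sketch (the placeholder values are hypothetical; paste in your actual key and endpoint from the Azure portal, and never commit the real values):
```js
// hardcoded for local testing only -- replace the placeholders with your
// actual Face API key and endpoint from the Azure portal
const subscriptionKey = '<your-face-api-key>';
const uriBase = '<your-endpoint>' + '/face/v1.0/detect';
```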
💡 Afterwards, when committing the code to GitHub for the counselorbot to check, you will need to replace the hardcoded subscriptionKey and uriBase variables with process.env.SUBSCRIPTIONKEY and process.env.ENDPOINT.
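That swap might look like this (assuming you named the application settings SUBSCRIPTIONKEY and ENDPOINT, matching task step 1):
```js
// read the key and endpoint from the Function App's application settings
const subscriptionKey = process.env.SUBSCRIPTIONKEY;
const uriBase = process.env.ENDPOINT + '/face/v1.0/detect';
```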
❓ Why do we need to do this?
When running your program on your own computer, the program has no way of accessing Microsoft Azure's function application settings, so `process.env` will not work. However, once you deploy the function to Azure, it can "see" the application settings and use them through `process.env`.
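One convenience pattern you may see (an assumption on my part, not something the checker requires) is falling back to a local value when the environment variable isn't set:
```js
// use the app setting when deployed, or a hardcoded value when testing locally
// (the placeholder is hypothetical -- never commit a real key)
const subscriptionKey = process.env.SUBSCRIPTIONKEY || '<local-test-key>';
```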
2. Call the Face API
Create a new function `analyzeImage(img)` outside of `module.exports` that will handle analyzing the image. Keep in mind this function is `async`, because we will be using the `await` keyword with the API call.
Get your Face API keys ready! We need to let our Face API know that we are authenticated to access this resource.
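As a rough sketch of the structure (the body of analyzeImage gets filled in over the next steps):
```js
// analyzeImage lives outside module.exports
async function analyzeImage(img) {
    // the Face API call will go here
}

module.exports = async function (context, req) {
    // your existing function code
};
```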
❓ What is an async function?
Javascript is what we call a "synchronous" language, meaning operations in Javascript block other operations from executing until they are complete, creating a sense of single directional flow. **This means that only one operation can happen at a time.** However, in order to maximize efficiency (save time and resources), Javascript allows the use of asynchronous functions.
Async functions allow other operations to continue running as they are being executed. Refer to [this blog](https://dev.to/hardy613/asynchronous-vs-synchronous-programming-23ed) for more information.
Promises, true to their real-life meaning, are objects that return a value at some point in the future, or a reason why that value could not be returned; they represent the result of an async operation that may or may not have resolved yet.
> [Promise (MDN JavaScript reference)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
> [Is JavaScript Synchronous or Asynchronous? What the Hell is a Promise?](https://medium.com/better-programming/is-javascript-synchronous-or-asynchronous-what-the-hell-is-a-promise-7aa9dd8f3bfb)
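Here's a tiny standalone illustration of async/await (not part of the project code): the wait helper returns a Promise, and await pauses demo() without blocking everything else.
```js
// a Promise that resolves after the given number of milliseconds
function wait(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

async function demo() {
    console.log('before');
    await wait(1000); // demo() pauses here, but other code can keep running
    console.log('after one second');
}

demo();
```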
❓ How do you specify the parameters for your request?
In order to specify all of our parameters easily, we're going to create a new `URLSearchParams` object. Here's the object declared for you. I've also already specified one parameter, `returnFaceId`, as `true` to provide an example. Add in a new parameter that requests emotion.
Remember, these parameters are coming from the [Face API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)!
```js
let params = new URLSearchParams({
    'returnFaceId': 'true',
    '': '' //FILL IN THIS LINE
})
```
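If you're unsure what URLSearchParams actually produces, here's a tiny demo with made-up keys:
```js
// URLSearchParams turns key/value pairs into a URL query string
const demo = new URLSearchParams({ 'first': '1', 'second': '2' });
console.log(demo.toString()); // "first=1&second=2"
```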
🛠️ Calling the API
In the same way we installed parse-multipart, install node-fetch (for example, with `npm install node-fetch` in your function's folder).
Read the API section of the node-fetch documentation. We're going to make a call using the `fetch(url, {options})` function.
To call the `fetch` function, use the `await` keyword, which we need because `fetch` returns a Promise: a proxy for a value that isn't currently known. You can read more about JavaScript Promises in the links above.
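Assuming node-fetch v2 (the version that works with require), pulling it into your function looks like this:
```js
// at the top of emotionalgifs/index.js
const fetch = require('node-fetch');
```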
API documentation can be tricky sometimes... here's something to help.
💡 Request Headers tell the receiving end of the request what type of data is in the body.
❓ How do I specify Request Headers?
Go back to the Face API documentation [here](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), and find the **Request headers** section.
There are two headers that you need. I've provided the format below. Enter in the two header names and their two corresponding values.
FYI: The `Content-Type` header should be set to `'application/octet-stream'`. This specifies a binary file.
```js
//COMPLETE THE CODE
let resp = await fetch(uriBase + '?' + params.toString(), {
    method: '', //WHAT TYPE OF REQUEST?
    body: '', //WHAT ARE WE SENDING TO THE API?
    //ADD YOUR TWO HEADERS HERE
    headers: {
        '': ''
    }
})
```
❓ What is the URL?
Notice that the URL is just the `uriBase` with the params we specified earlier appended on.
`const uriBase = '' + '/face/v1.0/detect';`
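For illustration, with a hypothetical endpoint the final request URL would look like this:
```js
// hypothetical endpoint, for illustration only
const uriBase = 'https://<your-resource>.cognitiveservices.azure.com' + '/face/v1.0/detect';

// with the params object from above, the full request URL becomes:
console.log(uriBase + '?' + params.toString());
// https://<your-resource>.cognitiveservices.azure.com/face/v1.0/detect?returnFaceId=true&...
```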
❓ Still Confused?
Fill in the `method` and `body`.
```js
async function analyzeImage(img){
    const subscriptionKey = '';
    const uriBase = '' + '/face/v1.0/detect';

    let params = new URLSearchParams({
        'returnFaceId': 'true',
        'returnFaceAttributes': 'emotion'
    })

    //COMPLETE THE CODE
    let resp = await fetch(uriBase + '?' + params.toString(), {
        method: '', //WHAT TYPE OF REQUEST?
        body: '', //WHAT ARE WE SENDING TO THE API?
        headers: {
            '': '' //do this in the next section
        }
    })

    let data = await resp.json();
    return data;
}
```
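If you want to check your work against a reference, here's one way the completed call might look. The Ocp-Apim-Subscription-Key header name comes from the Face API documentation; the environment variable names match the ones above.
```js
// one possible completed version
async function analyzeImage(img) {
    const subscriptionKey = process.env.SUBSCRIPTIONKEY;
    const uriBase = process.env.ENDPOINT + '/face/v1.0/detect';

    let params = new URLSearchParams({
        'returnFaceId': 'true',
        'returnFaceAttributes': 'emotion'
    });

    let resp = await fetch(uriBase + '?' + params.toString(), {
        method: 'POST',   // we're sending data to the API
        body: img,        // the raw binary image
        headers: {
            'Content-Type': 'application/octet-stream',   // binary file
            'Ocp-Apim-Subscription-Key': subscriptionKey  // authenticates the request
        }
    });

    let data = await resp.json();
    return data;
}
```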
⏬ Receiving Data
Call the analyzeImage function in module.exports. Add the code below into module.exports.
Remember that parts represents the parsed multipart form data. It is an array of parts, each one described by a filename, a type, and the data. Since we only sent one file, it is stored at index 0, and we want the data property to access the binary file, hence parts[0].data. Then, in the HTTP response of our Azure Function, we store the result of the API call.
```js
//module.exports function
//analyze the image
const result = await analyzeImage(parts[0].data);

context.res = {
    body: {
        result
    }
};
console.log(result);
context.done();
```
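For context, the surrounding module.exports might look roughly like this, assuming the parse-multipart setup from the earlier step (the boundary-parsing lines are a sketch of that step, not new requirements):
```js
const fetch = require('node-fetch');
const multipart = require('parse-multipart');

module.exports = async function (context, req) {
    // split the incoming multipart form data into its parts
    const boundary = multipart.getBoundary(req.headers['content-type']);
    const parts = multipart.Parse(Buffer.from(req.body), boundary);

    // analyze the image (parts[0].data is the binary file)
    const result = await analyzeImage(parts[0].data);

    context.res = {
        body: {
            result
        }
    };
    console.log(result);
    context.done();
};
```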
📹 Walkthrough Video