Week 2 Step 4 ⬤⬤⬤⬤◯◯◯◯ | 🕐 Estimated completion: 5-10 minutes
Modify your Azure Function so that it returns the Dominant Emotion of an Image.
✅ Task: In `emotionalgifs`, determine the dominant emotion in the emotion data and output the dominant emotion in the request body when you make a POST request. Commit your code to `emotionalgifs/index.js` on the `emotionalgifs` branch.

In order to match the results of the Face API with GIFs from the GIPHY API, we need to determine the dominant emotion from the API response.
We need to access the emotion data by itself, without the face id and other analyzed data. To do this, we need to create another variable in the first async function in our Azure Function:

```js
let emotions = result[0].faceAttributes.emotion;
```
:bulb: Now that you've got the JSON object with all the emotion values, find the highest-valued emotion! Use `context.log(emotions)` to see how it's structured.
We're accessing the data at index 0 because we're analyzing one face. If there were two faces, the data for the second face would be stored at index 1.
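For example, once `emotions` holds that JSON object, the highest-valued key can be picked out with `Object.keys` and `Object.values`. A minimal sketch, using a made-up sample of the emotion data:

```javascript
// Made-up sample of the emotion data for one analyzed face
const emotions = {
    anger: 0, contempt: 0, disgust: 0, fear: 0,
    happiness: 1, neutral: 0, sadness: 0, surprise: 0
};

// Find the key whose value equals the maximum of all the values
const values = Object.values(emotions);
const dominant = Object.keys(emotions).find(key => emotions[key] === Math.max(...values));
console.log(dominant); // happiness
```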
Week 2 Step 5 ⬤⬤⬤⬤⬤◯◯◯ | 🕐 Estimated completion: 20-30 minutes
Call the GIPHY API with the dominant emotion of a picture
✅ Task: In `emotionalgifs`, call the GIPHY API in a GET request with the dominant emotion. Commit your code to `emotionalgifs/index.js` on the `emotionalgifs` branch.

Create a POST request in Postman. Use the function URL as the request URL, and send an image in the body of the request.

We're going to connect your first Azure function, `emotionalgifs`, with the GIPHY API.
Set up an account by clicking here and entering an email address, username, and password.

According to the documentation, an API key is a required parameter in a call to GIPHY's translate endpoint. The link (for gifs) that is listed in the documentation is the endpoint we will be using in this project.
We will be calling the GIPHY API in the same function that analyzes inputted images.
Create another async function in `emotionalgifs` called `findGifs`. It needs a parameter through which we can pass the dominant emotion of an image. Call this parameter `emotion`.

:bulb: Use the documentation to create a request to the GIPHY API in the function.
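As a sketch, `findGifs` could hit GIPHY's translate endpoint like this. It assumes Node 18+ where `fetch` is global (otherwise install and require `node-fetch`), a `GIPHY_KEY` application setting, and that the `embed_url` field of the response is the link you want; check these against the documentation:

```javascript
// Build the translate-endpoint URL for a given emotion
function giphyUrl(emotion) {
    const params = new URLSearchParams({
        'api_key': process.env.GIPHY_KEY, // your GIPHY API key
        's': emotion                      // the search term: our dominant emotion
    });
    return 'https://api.giphy.com/v1/gifs/translate?' + params.toString();
}

// Async: returns a Promise that resolves to a gif link
async function findGifs(emotion) {
    const resp = await fetch(giphyUrl(emotion));
    const json = await resp.json();
    return json.data.embed_url; // one of several link fields GIPHY returns
}
```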
Now that you have an async function that can call the GIPHY API with an emotion and return a gif link, you need to incorporate it into the main `module.exports` function.

TIP: Use `await` to receive a response from the function, since it is async!

❗ How should I call the findGifs function we wrote?
Let's call `findGifs` in the first async function in `emotionalgifs`. Currently, our first async function looks like this:
```js
module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    var boundary = multipart.getBoundary(req.headers['content-type']);
    var body = req.body;
    var parts = multipart.Parse(body, boundary);
    var result = await analyzeImage(parts[0].data);
    let emotions = result[0].faceAttributes.emotion;
    let objects = Object.values(emotions);
    const main_emotion = Object.keys(emotions).find(key => emotions[key] === Math.max(...objects));
    context.res = {
        // status: 200, /* Defaults to 200 */
        body: main_emotion
    };
    console.log(result)
    context.done();
}
```
We need to declare another variable, `gif`. It needs to store the link returned when our new async function, `findGifs`, is called. Also, the dominant emotion from our analyzed image needs to be passed through the `emotion` parameter.
```js
var gif = await //WHAT GOES HERE?
```
Finally, we need our new variable `gif` to be the output of `emotionalgifs` rather than `main_emotion`:
```js
context.res = {
    // status: 200, /* Defaults to 200 */
    body: //WHAT GOES HERE?
};
```
Go ahead and merge this branch to `main` to move on. Great work finishing this section!

⚠️ If you receive a `Conflicts` error, simply press **Resolve conflicts** and you should be good to merge!
Week 2 Step 3 ⬤⬤⬤◯◯◯◯◯ | 🕐 Estimated completion: 10-30 minutes
Getting Emotional ~ With the Face API
✅ Task: Create an Azure function that can send a static image to the Face API and returns emotion data. Use `node-fetch` and send back the emotion data in the body in JSON format: `body: {the_data}`. Use your `EMOTIONAL_ENDPOINT` and commit your code to `emotionalgifs/index.js` on the `emotionalgifs` branch.

🚧 Test your Work
Use Postman! Paste the function URL and make a POST request. Remember to attach the file in `Body`! In the output box, you should get the output. Make sure you're using an image with a real face on it or else it won't work. Here's an example of an output I get with this image:

:white_check_mark: Expected Output

The output should be in JSON format like so:

```json
{
  "result": [
    {
      "faceId": "a16f522d-0577-4e50-97cb-1feeef7aaf2e",
      "faceRectangle": {
        "top": 313,
        "left": 210,
        "width": 594,
        "height": 594
      },
      "faceAttributes": {
        "emotion": {
          "anger": 0,
          "contempt": 0,
          "disgust": 0,
          "fear": 0,
          "happiness": 1,
          "neutral": 0,
          "sadness": 0,
          "surprise": 0
        }
      }
    }
  ]
}
```

1. Define keys to authenticate the request
This function takes in one parameter, `img`, that contains the image we're trying to analyze. Inside, we have two variables involved in the call: `subscriptionKey` and `uriBase`.

The `process.env` object allows you to access super-secret values in your backend. This prevents hackers from getting your keys and doing bad stuff (like exploiting your resources).

Follow this Documentation to save your keys in your Function App's application settings.
💡 When you put the key and endpoint in Azure's application settings, it will only work after it is deployed and running with the Azure function URL. In order to use Postman and test this on your computer before deploying to Azure, you will need to hardcode the `subscriptionKey` and `uriBase` like this:

💡 Afterwards, when committing the code to GitHub for the counselorbot to check, you will need to replace the hardcoded `subscriptionKey` and `uriBase` variables with `process.env.SUBSCRIPTIONKEY` and `process.env.ENDPOINT`.
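A sketch of the two variants side by side. The values here are placeholders, and the `/face/v1.0/detect` path is the Face API's detect endpoint appended to your resource URL, so adjust both to match your own resource:

```javascript
// Local testing: hardcode the values (never commit real keys!)
const subscriptionKey = '<your-face-api-key>';
const uriBase = '<your-endpoint>/face/v1.0/detect';

// Before committing: read them from the Function App's application settings instead
// const subscriptionKey = process.env.SUBSCRIPTIONKEY;
// const uriBase = process.env.ENDPOINT;
```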
❓ Why do we need to do this?

When running your program on your own computer, the program has no way of accessing Microsoft Azure's function application settings, so `process.env` will not work. However, once you deploy the function onto Azure, the function can "see" the application settings and use them through `process.env`.

2. Call the FACE API
Create a new function `analyzeImage(img)` outside of `module.exports` that will handle analyzing the image. Keep in mind this function is async because we will be using the `await` keyword with the API call.

❓ What is an async function?
JavaScript is what we call a "synchronous" language, meaning operations in JavaScript block other operations from executing until they are complete, creating a sense of single-directional flow. **This means that only one operation can happen at a time.** However, in order to maximize efficiency (save time and resources), JavaScript allows the use of asynchronous functions. Async functions allow other operations to continue running as they are being executed. Refer to [this blog](https://dev.to/hardy613/asynchronous-vs-synchronous-programming-23ed) for more information.

Promises, similar to their real-life meaning, return a value at some point in the future, or a reason why that value could not be returned; they represent the result of an async function that may or may not have resolved yet.

> [Is JavaScript Synchronous or Asynchronous? What the Heck is a Promise?](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
> [Master the JavaScript Interview: What is a Promise?](https://medium.com/better-programming/is-javascript-synchronous-or-asynchronous-what-the-hell-is-a-promise-7aa9dd8f3bfb)
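To make that concrete, here is a tiny self-contained sketch; the `delay` helper is made up purely for illustration:

```javascript
// A Promise that resolves with `value` after `ms` milliseconds
function delay(ms, value) {
    return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

async function demo() {
    // `await` pauses only demo(); other operations keep running meanwhile
    const result = await delay(10, 'done');
    return result;
}

// demo() itself returns a Promise, so we use .then() to read its value
demo().then(r => console.log(r)); // logs "done" after ~10ms
```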
❗️ Specify the parameters for your request!

In order to specify all of our parameters easily, we're going to create a new `URLSearchParams` object. Here's the object declared for you. I've also already specified one parameter, `returnFaceId`, as `true` to provide an example. Add in a new parameter that requests emotion. Remember, these parameters are coming from the [Face API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)!

```js
let params = new URLSearchParams({
    'returnFaceId': 'true',
    //WHAT GOES HERE?
})
```

:hammer_and_wrench: Calling the API
In the same way we installed `parse-multipart`, install `node-fetch`. Read the API section of the documentation. We're going to make a call using the `fetch(url, {options})` function.

To call the `fetch` function, use the `await` keyword, which we need because `fetch` returns a Promise, which is a proxy for a value that isn't currently known. You can read about JavaScript promises here.

:bulb: Request Headers tell the receiving end of the request what type of data is in the body.
❓ How do I specify Request Headers?

Go back to the Face API documentation [here](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), and find the **Request headers** section. There are two headers that you need. I've provided the format below. Enter in the two header names and their two corresponding values. FYI: The `Content-Type` header should be set to `'application/octet-stream'`. This specifies a binary file.

```js
//COMPLETE THE CODE
let resp = await fetch(uriBase + '?' + params.toString(), {
    method: '', //WHAT GOES HERE?
    headers: {
        //WHAT GOES HERE?
    },
    body: img
})
```

❓ What is the URL?
Notice that the URL is just the `uriBase` with the params we specified earlier appended on: `uriBase + '?' + params.toString()`.

❓ Still Confused?

Fill in the `method` and `body`.

```js
//COMPLETE THE CODE
async function analyzeImage(img){
    const subscriptionKey = '...'
    // ...
}
```
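If you want to check your finished function against something, here is a hedged sketch of one possible `analyzeImage`. It assumes Node 18+ where `fetch` is global (otherwise use `node-fetch`) and the `SUBSCRIPTIONKEY`/`ENDPOINT` application settings from earlier; the query parameters are split into a helper just so they're easy to inspect:

```javascript
// Query parameters for the Face API detect call
function faceParams() {
    return new URLSearchParams({
        'returnFaceId': 'true',
        'returnFaceAttributes': 'emotion' // ask the API for emotion data
    });
}

// Send the binary image to the Face API and return the parsed JSON analysis
async function analyzeImage(img) {
    const subscriptionKey = process.env.SUBSCRIPTIONKEY;
    const uriBase = process.env.ENDPOINT + '/face/v1.0/detect';

    const resp = await fetch(uriBase + '?' + faceParams().toString(), {
        method: 'POST',
        headers: {
            'Content-Type': 'application/octet-stream',  // binary file in the body
            'Ocp-Apim-Subscription-Key': subscriptionKey // authenticates the request
        },
        body: img
    });
    return resp.json();
}
```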
⏬ Receiving Data

Call the `analyzeImage` function in `module.exports`. Add the code below into `module.exports`.

Remember that `parts` represents the parsed multipart form data. It is an `array` of parts, each one described by a filename, a type and a data. Since we only sent one file, it is stored in index 0, and we want the `data` property to access the binary file, hence `parts[0].data`. Then, in the HTTP response of our Azure function, we store the result of the API call.
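To make the indexing concrete, here is a tiny sketch with a fake `parts` array shaped like `parse-multipart`'s output; the bytes and filename are made up:

```javascript
// Stand-in for what multipart.Parse(body, boundary) returns: one part per file
const parts = [
    {
        filename: 'face.jpg',                 // name of the uploaded file
        type: 'image/jpeg',                   // its content type
        data: Buffer.from([0xff, 0xd8, 0xff]) // fake JPEG bytes
    }
];

// We sent one file, so its binary contents live at index 0
const img = parts[0].data;
console.log(img.length); // 3
```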