shreythecray / serverless-demo


Emotionalgifs #11

Closed shreythecray closed 3 years ago

ghost commented 3 years ago

Week 2 Step 3 ⬤⬤⬤◯◯◯◯◯ | 🕐 Estimated completion: 10-30 minutes

Getting Emotional ~ With the Face API

Remember back in step 1 we used Postman to call the API? Well we are going to call the Face API with our Azure Function!

✅ Task:

Create an Azure function that can send a static image to the Face API that returns emotion data

🚧 Test your Work

Use Postman! Paste the function URL and make a POST request. Remember to attach the image file in the Body! Make sure you're using an image with a real face in it, or else it won't work. Here's an example of the output you should get:

:white_check_mark: Expected Output

The output should be in JSON format like so:

```json
{
  "result": [
    {
      "faceId": "a16f522d-0577-4e50-97cb-1feeef7aaf2e",
      "faceRectangle": {
        "top": 313,
        "left": 210,
        "width": 594,
        "height": 594
      },
      "faceAttributes": {
        "emotion": {
          "anger": 0,
          "contempt": 0,
          "disgust": 0,
          "fear": 0,
          "happiness": 1,
          "neutral": 0,
          "sadness": 0,
          "surprise": 0
        }
      }
    }
  ]
}
```

1. Define keys to authenticate the request

This function takes in one parameter, img, that contains the image we're trying to analyze. Inside, we have two variables involved in the call: subscriptionKey and uriBase.

The process.env object allows you to access super-secret values in your backend. This prevents hackers from getting your keys and doing bad stuff (like exploiting your resources).

async function analyzeImage(img){
    const subscriptionKey = process.env.SUBSCRIPTIONKEY;
    const uriBase = process.env.ENDPOINT + '/face/v1.0/detect';
}
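As a quick illustration of how `process.env` works (`MY_SECRET` is a hypothetical name here; in Azure, these values come from your Function App's application settings):

```js
// Sketch: environment variables are plain string values on process.env.
// MY_SECRET is an invented name; normally it's set outside your code,
// e.g. in the Function App's application settings.
process.env.MY_SECRET = 'shh';

console.log(process.env.MY_SECRET);        // "shh"
console.log(typeof process.env.MY_SECRET); // "string"
```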

Follow this Documentation to save your keys in your Function App.

2. Call the Face API

Create a new function analyzeImage(img) outside of module.exports that will handle analyzing the image. Keep in mind this function is async because we will be using the await keyword with the API call.

Get your Face API keys ready! We need to let our Face API know that we are authenticated to access this resource.

❓ What is an async function?
JavaScript is what we call a "synchronous" language, meaning operations in JavaScript block other operations from executing until they are complete, creating a sense of single-directional flow. **This means that only one operation can happen at a time.** However, in order to maximize efficiency (save time and resources), JavaScript allows the use of asynchronous functions. Async functions let other operations continue running while they are being executed. Refer to [this blog](https://dev.to/hardy613/asynchronous-vs-synchronous-programming-23ed) for more information.

Promises, similar to their real-life meaning, return a value at some point in the future, or a reason why that value could not be returned - they represent the result of an async function that may or may not be resolved yet.

> [Is JavaScript Synchronous or Asynchronous? What the Heck is a Promise?](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
> [Master the JavaScript Interview: What is a Promise?](https://medium.com/better-programming/is-javascript-synchronous-or-asynchronous-what-the-hell-is-a-promise-7aa9dd8f3bfb)
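As a small, self-contained sketch of the idea (`addLater` is an invented example, not part of our Azure Function):

```js
// Sketch: an async function always returns a Promise, and `await`
// unwraps the resolved value.
async function addLater(a, b) {
    return a + b; // automatically wrapped in a resolved Promise
}

async function main() {
    const pending = addLater(2, 3); // a Promise, not yet a number
    const value = await pending;    // await unwraps it
    console.log(value);             // 5
}

main();
```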
❗️ Specify the parameters for your request!
In order to specify all of our parameters easily, we're going to create a new `URLSearchParams` object. Here's the object declared for you. I've also already specified one parameter, `returnFaceId`, as `true`, to provide an example. Add in a new parameter that requests emotion. Remember, these parameters come from the [Face API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)!

```js
let params = new URLSearchParams({
    'returnFaceId': 'true',
    '': '' //FILL IN THIS LINE
})
```
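If you want to see how `URLSearchParams` serializes, here's a tiny sketch with invented keys (`a` and `b` are not real Face API parameters):

```js
// Sketch: URLSearchParams turns key/value pairs into a query string.
// 'a' and 'b' are invented keys, purely for illustration.
let demo = new URLSearchParams({
    'a': '1',
    'b': 'two'
});

console.log(demo.toString()); // "a=1&b=two"
```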

:hammer_and_wrench: Calling the API

In the same way we installed parse-multipart, install node-fetch. Read the API section of the documentation. We're going to make a call using the fetch(url, {options}) function.

When calling the fetch function, use the await keyword. We need it because fetch returns a Promise, which is a proxy for a value that isn't currently known. You can read about JavaScript Promises here.

API Documentation can be tricky sometimes...Here's something to help

:bulb: Request Headers tell the receiving end of the request what type of data is in the body.

❓ How do I specify Request Headers?
Go back to the Face API documentation [here](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), and find the **Request headers** section. There are two headers that you need. I've provided the format below. Enter the two header names and their two corresponding values.

FYI: The `Content-Type` header should be set to `'application/octet-stream'`. This specifies a binary file.

```js
//COMPLETE THE CODE
let resp = await fetch(uriBase + '?' + params.toString(), {
    method: '', //WHAT TYPE OF REQUEST?
    body: '', //WHAT ARE WE SENDING TO THE API?
    //ADD YOUR TWO HEADERS HERE
    headers: {
        '': ''
    }
})
```
❓ What is the URL?
Notice that the URL is just the `uriBase` with the params we specified earlier appended on: `const uriBase = '' + '/face/v1.0/detect';`
❓ Still Confused?
Fill in the `method` and `body`.

```js
async function analyzeImage(img){
    const subscriptionKey = '';
    const uriBase = '' + '/face/v1.0/detect';

    let params = new URLSearchParams({
        'returnFaceId': 'true',
        'returnFaceAttributes': 'emotion'
    })

    //COMPLETE THE CODE
    let resp = await fetch(uriBase + '?' + params.toString(), {
        method: '', //WHAT TYPE OF REQUEST?
        body: '', //WHAT ARE WE SENDING TO THE API?
        headers: {
            '': '' //do this in the next section
        }
    })

    let data = await resp.json();
    return data;
}
```

⏬ Receiving Data

Call the analyzeImage function in module.exports. Add the code below into module.exports.

Remember that parts represents the parsed multipart form data. It is an array of parts, each described by a filename, a type, and a data property. Since we only sent one file, it is stored at index 0, and we want the data property to access the binary file - hence parts[0].data. Then, in the HTTP response of our Azure Function, we store the result of the API call.

//module.exports function
//analyze the image
var result = await analyzeImage(parts[0].data);
context.res = {
    body: {
        result
    }
};
console.log(result)
context.done(); 
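If it helps, here's a toy sketch of what `parts` can look like after parsing (all values invented for illustration):

```js
// Sketch: parse-multipart yields an array of parts; each part has a
// filename, a type, and a data property. These values are invented.
const parts = [
    { filename: 'face.jpg', type: 'image/jpeg', data: Buffer.from([0xff, 0xd8]) }
];

const imageBuffer = parts[0].data; // the binary file we'd pass to analyzeImage
console.log(Buffer.isBuffer(imageBuffer)); // true
```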
ghost commented 3 years ago

Week 2 Step 4 ⬤⬤⬤⬤◯◯◯◯ | 🕐 Estimated completion: 5-10 minutes

Getting Emotional ~ Returning the Dominant Emotion

✅ Task:

Modify your Azure Function so that it returns the Dominant Emotion of an Image.

1. Finding the Dominant Emotion

In order to match the results of the Face API with Gifs from the Giphy API, we need to determine the dominant emotion from the API response.

:hammer_and_wrench: Modifying the Azure Function

We need to access the emotion data by itself, without the face id and other analyzed data. To do this, we need to create another variable in the first async function in our Azure Function:

let emotions = result[0].faceAttributes.emotion;

:bulb: Now you've got the JSON object with all the emotion values, find the highest valued emotion! Use context.log(emotions) to see how it's structured.

We're accessing the data at index 0 because we're analyzing one face. If there were two faces, the data for the second face would be stored at index 1.

❓ How do I find the max value from the JSON object?
1️⃣ We need to create **an array** with the emotion values (ranging from 0 to 1) so that we can manipulate it and find the dominant emotion. `Object.values()` converts an object into an array, with each **value** in the object stored as a separate element:

```js
let objects = Object.values(WHAT_IS_YOUR_JSON); // FILL IT IN
// What your array could look like: [0.01, 0.34, .....]
```

> :bulb: In JSON, the **key** values are what you use to access the **value**. `{key: value}`, or in our case, `{emotion: value}`.

Next, we need to find the dominant emotion in the array `objects`:

2️⃣ Let's break this line down.

```js
const main_emotion = Object.keys(emotions).find(key => emotions[key] === Math.max(...objects));
```

- `Math.max(...objects)` finds the max value. Let's say it's `0.99`.
- `Object.keys(emotions).find(key => emotions[key] === Math.max(...objects));` finds the emotion, or key, that matches the max value of `0.99`. Let's say it's `happiness`.

3️⃣ Now, `main_emotion` contains the dominant emotion! All we need to do is output `main_emotion` when the function is called:

```js
context.res = {
    // status: 200, /* Defaults to 200 */
    body: main_emotion
};
```
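Putting those pieces together on made-up emotion scores (these numbers are invented, not real API output):

```js
// Sketch: finding the dominant emotion from invented scores.
const emotions = { anger: 0.01, happiness: 0.97, sadness: 0.02 };

let objects = Object.values(emotions); // [0.01, 0.97, 0.02]
const main_emotion = Object.keys(emotions)
    .find(key => emotions[key] === Math.max(...objects));

console.log(main_emotion); // "happiness"
```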
ghost commented 3 years ago

Week 2 Step 5 ⬤⬤⬤⬤⬤◯◯◯ | 🕐 Estimated completion: 20-30 minutes

Getting Emotional ~ Calling the Giphy API

✅ Task:

Call the GIPHY API with the dominant emotion of a picture

:construction: Test your work

Create a POST request in Postman. Use the function URL as the request URL, and send an image in the body of the request.

:white_check_mark: Expected Output
The link outputted by the function should look something like this:

`https://giphy.com/gifs/happy-spongebob-squarepants-happiness-brHaCdJqCXijm`

1. Using the GIPHY API

We're going to connect your first Azure function, emotionalgifs, with the GIPHY API.

Creating a GIPHY Account

Set up an account by clicking here and enter an email address, username, and password.

Generating an API Key

According to the documentation, an API key is a required parameter in a call to GIPHY's translate endpoint. The link (for gifs) listed in the documentation is the endpoint we will be using in this project.

:question: How do I create an API key?
To create an **API key**, click [here](https://developers.giphy.com/dashboard/) and click Create an App. Select API, **not** SDK! Then, enter the required information. Click **Create App**, and your key should be given. Next, store your API key in your Azure Function's environment secrets.

2. Modifying the Azure Function

We will be calling the GIPHY API in the same function that analyzes inputted images.

Create another async function in emotionalgifs called findGifs. It needs a parameter through which we can pass the dominant emotion of an image. Call this parameter emotion.

:bulb: Use the documentation to create a request to the Giphy API in the function.

:question: How do you call the GIPHY API?
We're going to call the GIPHY API inside our new async function using `fetch`. Use the **translate endpoint** from the [documentation](https://developers.giphy.com/docs/api/endpoint#translate). HINT: we want the dominant emotion from the image to be the **search term**, and we only want **1 gif** to be returned.

```js
//COMPLETE THE CODE
const apiResult = await fetch("https://api.giphy.com/v1/gifs/translate?//WHAT GOES HERE?");
```

> **Hint**: If you read the documentation correctly, you should see that you need to use your API key in the request. Remember, to access your environment secrets, you can use `process.env['the secret name']`.

We need to convert the content of `apiResult` into **JSON format**. Remember, we're using the `await` keyword because `fetch` (which we used to call the GIPHY API) returns a Promise, and a Promise is a **proxy** for a value that isn't currently known.

```js
const jsonResult = await //WHAT GOES HERE?.json();
```

Finally, we need to return a specific link from the JSON file stored in `jsonResult`:

```js
return //WHAT GOES HERE.data.url;
```
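As a hedged sketch of just the URL assembly (no network call is made here; `GIPHYKEY` is an assumed secret name, and `s` is the translate endpoint's search-term parameter):

```js
// Sketch: assembling the translate-endpoint URL. GIPHYKEY is an
// invented environment-secret name; use whatever name you saved.
const emotion = 'happiness'; // pretend dominant emotion
const giphyParams = new URLSearchParams({
    api_key: process.env.GIPHYKEY || 'YOUR-KEY-HERE',
    s: emotion // the search term
});
const giphyUrl = 'https://api.giphy.com/v1/gifs/translate?' + giphyParams.toString();

console.log(giphyUrl);
```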

Now that you have an async function that can call the GIPHY API with an emotion and return a gif link, you need to incorporate it into the main module.exports function.

TIP: Use await to receive a response from the function since it is async!

❗ How should I call the findGifs function we wrote?

Let's call findGifs in the first async function in emotionalgifs. Currently, our first async function looks like this:

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    var boundary = multipart.getBoundary(req.headers['content-type']);
    var body = req.body;
    var parts = multipart.Parse(body, boundary);

    var result = await analyzeImage(parts[0].data);

    let emotions = result[0].faceAttributes.emotion;
    let objects = Object.values(emotions);
    const main_emotion = Object.keys(emotions).find(key => emotions[key] === Math.max(...objects));

    context.res = {
        // status: 200, /* Defaults to 200 */
        body: main_emotion
    };
    console.log(result)
    context.done();
}

We need to declare another variable, gif. It needs to store the link returned when our new async function, findGifs, is called. Also, the dominant emotion from our analyzed image needs to be passed through the emotion parameter.

var gif = await //WHAT GOES HERE?

Finally, we need our new variable gif to be the output of emotionalgifs rather than main_emotion:

context.res = {
    // status: 200, /* Defaults to 200 */
    body: //WHAT GOES HERE?
};



ghost commented 3 years ago

⏰ Time to merge!

Go ahead and merge this branch to main to move on. Great work finishing this section!


⚠️ If you receive a Conflicts error, simply press Resolve conflicts and you should be good to merge!