
Getting Emotional with APIs #10


ghost commented 3 years ago

Week 2 Step 1 ⬤◯◯◯◯◯◯◯ | 🕐 Estimated completion: 5-20 minutes

Getting Emotional ~ With the Face API

Because of amazing APIs, you don't need to be an expert in machine learning and AI to take advantage of cutting-edge technology. In this project, we are going to build an API and webpage that return a GIF when you upload a picture of yourself!

✅ Task:

Create a request in Postman to send an image of a person to the Azure Face API to return the subject's emotions

🚧 Test your Work

You should get the following expected output if you have configured your Face API correctly and sent the request with the correct parameters and body.

:white_check_mark: Expected Output

```json
{
    "result": [
        {
            "faceId": "d25465d6-0c38-4417-8466-cabdd908e756",
            "faceRectangle": {
                "top": 313,
                "left": 210,
                "width": 594,
                "height": 594
            },
            "faceAttributes": {
                "emotion": {
                    "anger": 0,
                    "contempt": 0,
                    "disgust": 0,
                    "fear": 0,
                    "happiness": 1,
                    "neutral": 0,
                    "sadness": 0,
                    "surprise": 0
                }
            }
        }
    ]
}
```

1. The Face API

The Face API will accept the image and return information about the face, specifically emotions. Watch this video on Microsoft Cognitive Services for an in-depth explanation: http://www.youtube.com/watch?v=2aA8OEZ1wk8

❓ How do I create and access the Face API?

1. Log into your Azure portal
2. Navigate to **Create a Resource**, select the **AI + Machine Learning** tab on the left, and finally select **Face** and fill out the necessary information
3. Record and save the API endpoint and [subscription key](https://docs.microsoft.com/en-us/azure/api-management/api-management-subscriptions)
4. Place the API endpoint and subscription key in the GitHub repository secrets: `API_ENDPOINT` and `SUBSCRIPTION_KEY`
   * These keys will be used in the Azure function to give access to this API

❓ Where can I find the Face API keys?

1. Navigate to the home page on the Microsoft Azure portal (https://portal.azure.com/#home)
2. Click on the resource you need the keys for
3. On the left menu bar, locate the Resource Management section and click on "Keys and Endpoint"

2. Using Postman to Send a Request

Now, we can test if our API is working using Postman. Make sure to pay close attention to the documentation and the API Reference.

Request URL

A request URL is what a web client uses to ask a server for a resource. Notice that the request URL listed in the API reference is this:

https://{endpoint}/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]

Parameters

Parameters are typically used in requests to APIs to specify settings or customize what YOU want to receive.

❓ What are the parameters for the request?
The Request URL has the following parameters in [ ]: * [?returnFaceId] * [&returnFaceLandmarks] * [&returnFaceAttributes] * [&recognitionModel] * [&returnRecognitionModel] * [&detectionModel] Important things to note: - All of the bracketed sections represent possible request parameters - Read through **Request Parameters** section carefully - How can we specify that we want to get the emotion data? - All of the parameters are **optional** - We can delete the parameters we don't need in our request - Your **request URL** only requres one parameter, with a specific value - Between `detect` and your parameter, add `?` - *If you had more than one parameter,* you would need to place `&` between each (but not between `detect` and your first parameter) - Since we only have one parameter, no `&` are needed :bulb: [**All of this is located in the documentation! Find this section to read more:**](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) ![image](https://user-images.githubusercontent.com/69332964/119398425-8613c380-bca5-11eb-9cb3-575b6b0e3ee7.png)

Request Headers

Request Headers tell the receiving end of the request what type of data is in the body.

❓ How do I specify Request Headers?

- Go back to the Face API documentation [here](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), and find the **Request headers** section.
- The `Content-Type` header should be set to `application/octet-stream`. This specifies a binary file.
- The `Ocp-Apim-Subscription-Key` header should be set to one of your two keys from your Face API resource.
- Request headers are **not** part of the request URL. They are specified in the Postman headers tab.
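Filled in, the two headers on your Postman request should read like this (the key value is whichever of your two Face API keys you saved):

```
Content-Type: application/octet-stream
Ocp-Apim-Subscription-Key: <your Face API key>
```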

Request Body

The body of a POST request contains the data you are sending.

❓ How do I send the image in the body of the POST request?
To send a POST request, click on the dropdown and select `POST`. This means that we are going to send data to the server; prior to this, we have been getting data from the server with `GET` requests.

Go to the **Body** tab of your Postman request and select **binary**.

Next, just upload the [image](https://images.unsplash.com/photo-1507003211169-0a1dd7228f2d?ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&ixlib=rb-1.2.1&auto=format&fit=crop&w=934&q=80) and send your POST request.
AlexisRodriguezCS commented 3 years ago

```json
[
    {
        "faceId": "7234282d-7c76-4866-86dc-fb74eb9f629d",
        "faceRectangle": {
            "top": 367,
            "left": 235,
            "width": 413,
            "height": 413
        },
        "faceAttributes": {
            "emotion": {
                "anger": 0.0,
                "contempt": 0.0,
                "disgust": 0.0,
                "fear": 0.0,
                "happiness": 1.0,
                "neutral": 0.0,
                "sadness": 0.0,
                "surprise": 0.0
            }
        }
    }
]
```
ghost commented 3 years ago

Week 2 Step 2 ⬤⬤◯◯◯◯◯◯ | 🕐 Estimated completion: 5-20 minutes

Getting Emotional ~ With Parse-Multipart

✅ Task:

Create an Azure function that takes in an image from an HTTP request, parses it with parse-multipart, and returns the base64 code of the image

🚧 Test your Work

Use Postman! Paste the function URL and make a POST request. Remember to attach the file in Body! You should then see the response in the output box. Make sure you're using an image with a real face on it or else it won't work. Here's an example of an output I get with this image:

:white_check_mark: Expected Output

The output should be the base64 code of the inputted image, like this:

```base64
/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQIC...
```
:question: Confused about Postman?

1. Navigate back to the Postman app and change GET to POST
2. Publish/deploy your function and copy your function URL from the VS Code output
3. Use the function URL and any image you want to send the POST request. Remember to attach the file in `Body`, and send it using `form-data`!

💡 Keep in mind:

- When adding a file to `form-data`, you do NOT need to specify a key; you can send only a value (which in our case is a file).
- To change the value to a file, hover over the value box, click on the `Text` dropdown and select `File`.

![Untitled_ Nov 11, 2020 6_40 PM](https://user-images.githubusercontent.com/69332964/98876997-780afd80-244d-11eb-87fc-13822d909f2f.gif)

1. Create an `emotionalgifs` HTTP trigger function that parses images

What is a multipart request?

An HTTP multipart request is an HTTP request that clients construct to send files and data over to an HTTP server.

💡 This is commonly used by browsers and HTTP clients to upload files to the server.

Because we want to send an image file through an HTTP request, we need a piece of software to parse the raw data and extract the image file. Here comes the handy NPM package: parse-multipart!

The raw payload formatted as multipart/form-data will look like this:

```
------WebKitFormBoundaryDtbT5UpPj83kllfw
Content-Disposition: form-data; name="uploads[]"; filename="somebinary.dat"
Content-Type: application/octet-stream

some binary data...maybe the bits of a image..
------WebKitFormBoundaryDtbT5UpPj83kllfw
Content-Disposition: form-data; name="uploads[]"; filename="sometext.txt"
Content-Type: text/plain

hello how are you
------WebKitFormBoundaryDtbT5UpPj83kllfw--
```

The lines above represent a raw multipart/form-data payload sent by some HTTP client via form submission, containing two files. We need to extract all the files contained inside it. The multipart format allows you to send more than one file in the same payload; that's why it is called multipart.

:package: Installing `parse-multipart`

Before you can install `parse-multipart`, you need to enter `npm init -y` into the terminal. This command sets up a new npm package.

[Open up a terminal in VSCode](https://code.visualstudio.com/docs/editor/integrated-terminal) inside your function's directory, type `npm install parse-multipart`, and press enter.

> :bulb: Forgot how to navigate a terminal? [Check this out.](https://computers.tutsplus.com/tutorials/navigating-the-terminal-a-gentle-introduction--mac-3855)

**Note:** the text outputted by the console does not mean there was an error! The npm package has successfully been installed.
How do I use this package?

First, we need to declare the variable `multipart` outside of the async function so that we can access the NPM package:

```js
var multipart = require('parse-multipart');
```

Notice that `multipart.Parse(body, boundary)` requires two arguments, as it has two parameters. I've already gotten the boundary for you; just like the documentation example, our boundary is a string in the format `"----WebKitFormBoundary(random characters here)"`.

In the `multipart.Parse()` call, you need to figure out what the `body` parameter should be.

> :bulb: **Hint:** It should be the request body. Think about the template HTTP Trigger Azure function. How did we access the body in there?

```js
// here's your boundary:
var boundary = multipart.getBoundary(req.headers['content-type']);
// TODO: assign the body variable the correct value
var body = ''
// parse the body
var parts = multipart.Parse(body, boundary);
```

2. 🖼️ Receiving the image

When the POST request is made to this function with an image, we need to:

  1. Get the image from the parts (parse-multipart) variable
  2. Convert the image to base64
  3. Store the converted image in the response body
:question: What parts of the template code do we need?

Take a look at the standard `module.exports` function code:

```js
module.exports = async function (context, req) {
    // the code
}
```

This is the function that runs **every time your HTTP trigger gets a request**. As a parameter of this function, the `req` parameter contains all the information the request was sent with. *This contains*:

* Headers
* The body

Remove all of the content in `module.exports` except this:

```js
context.res = {
    // status: 200, /* Defaults to 200 */
    body: //LEAVE THIS BLANK
};
```
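For orientation, a stripped-down trigger that ignores its input and returns a fixed string shows how `context.res` becomes the HTTP response (the message text here is arbitrary):

```js
module.exports = async function (context, req) {
    // whatever you assign to context.res.body becomes the HTTP response body
    context.res = {
        // status: 200, /* Defaults to 200 */
        body: "hello from the function"
    };
};
```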

:question: How do we output the image in base64?

Next, we want to output the **base64** code of the inputted image. The parsed image data that we need to convert to base64 is stored in index 0 of `parts`, since we only sent one file, and we want the `data` property of this image to access the binary file. Thus, we will be converting `parts[0].data` to base64 and assigning the code to a new variable:

```javascript
var convertedResult = Buffer.from(parts[0].data).toString('_____'); // FILL IN THE BLANK
```

The `Buffer` part of the code provides **temporary storage** for the binary image data as it is converted to **base64**. Now, complete the following so that the **base64** code is outputted when the function is called:

```js
context.res = {
    // status: 200, /* Defaults to 200 */
    body: //WHAT GOES HERE?
};
```
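If you want to sanity-check the conversion itself in a Node REPL, here's a tiny example with an arbitrary string; the same call works on the image Buffer:

```js
// toString('base64') encodes a Buffer's bytes as base64 text
const encoded = Buffer.from('hi').toString('base64');
console.log(encoded);                                   // "aGk="
console.log(Buffer.from(encoded, 'base64').toString()); // back to "hi"
```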

ghost commented 3 years ago

Week 2 Step 3 ⬤⬤⬤◯◯◯◯◯ | 🕐 Estimated completion: 10-30 minutes

Getting Emotional ~ With the Face API

Remember how back in Step 1 we used Postman to call the Face API? Now we are going to call the Face API from our Azure Function!

✅ Task:

Create an Azure function that can send a static image to the Face API that returns emotion data

🚧 Test your Work

Use Postman! Paste the function URL and make a POST request. Remember to attach the file in Body! You should then see the response in the output box. Make sure you're using an image with a real face on it or else it won't work. Here's an example of an output I get with this image:

:white_check_mark: Expected Output

The output should be in JSON format like so:

```json
{
    "result": [
        {
            "faceId": "a16f522d-0577-4e50-97cb-1feeef7aaf2e",
            "faceRectangle": {
                "top": 313,
                "left": 210,
                "width": 594,
                "height": 594
            },
            "faceAttributes": {
                "emotion": {
                    "anger": 0,
                    "contempt": 0,
                    "disgust": 0,
                    "fear": 0,
                    "happiness": 1,
                    "neutral": 0,
                    "sadness": 0,
                    "surprise": 0
                }
            }
        }
    ]
}
```

1. Define keys to authenticate the request

This function takes in one parameter, `img`, that contains the image we're trying to analyze. Inside, we have two variables involved in the call: `subscriptionKey` and `uriBase`.

The process.env object allows you to access super-secret values in your backend. This prevents hackers from getting your keys and doing bad stuff (like exploiting your resources).

```js
async function analyzeImage(img){
    const subscriptionKey = process.env.SUBSCRIPTIONKEY;
    const uriBase = process.env.ENDPOINT + '/face/v1.0/detect';
}
```

Follow this Documentation to save your keys in your Function App's application settings.

💡 When you put the key and endpoint in Azure's application settings, they will only work once the function is deployed and running at the Azure function URL. In order to use Postman to test this on your computer before deploying to Azure, you will need to hardcode the `subscriptionKey` and `uriBase` like this:

```js
const subscriptionKey = "YOUR_SUBSCRIPTIONKEY"
const uriBase = "YOUR_ENDPOINT" + '/face/v1.0/detect'
```

💡 Afterwards, when committing the code to GitHub for the counselorbot to check, you will need to replace the hardcoded `subscriptionKey` and `uriBase` values with `process.env.SUBSCRIPTIONKEY` and `process.env.ENDPOINT`

❓ Why do we need to do this?
When running your program on your own computer, the program has no way of accessing Microsoft Azure's function application settings, so `process.env` will not work. However, once you deploy the function onto Azure, the function can "see" the application settings and use them through `process.env`.

2. Call the Face API

Create a new function analyzeImage(img) outside of module.exports that will handle analyzing the image. Keep in mind this function is async because we will be using the await keyword with the API call.

Get your Face API keys ready! We need to let our Face API know that we are authenticated to access this resource.

❓ What is an async function?
Javascript is what we call a "synchronous" language, meaning operations in Javascript block other operations from executing until they are complete, creating a sense of single-directional flow. **This means that only one operation can happen at a time.** However, in order to maximize efficiency (save time and resources), Javascript allows the use of asynchronous functions. Async functions allow other operations to continue running as they are being executed. Refer to [this blog](https://dev.to/hardy613/asynchronous-vs-synchronous-programming-23ed) for more information.

Promises, similar to their real-life meaning, are objects that will return a value at some point in the future, or a reason why that value could not be returned; they represent the result of an async function that may or may not be resolved yet.

> [MDN: Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
> [Is JavaScript Synchronous or Asynchronous? What the Heck is a Promise?](https://medium.com/better-programming/is-javascript-synchronous-or-asynchronous-what-the-hell-is-a-promise-7aa9dd8f3bfb)
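As a small illustration (the timing and strings here are arbitrary), `await` pauses only the surrounding async function while the rest of the program keeps running:

```js
// a Promise that resolves with "done" after one second
function slowOperation() {
    return new Promise(resolve => setTimeout(() => resolve('done'), 1000));
}

async function example() {
    const value = await slowOperation(); // pauses example(), not the whole program
    console.log(value);                  // "done" (after ~1 second)
}

example();
console.log('this prints first');        // runs while example() is still waiting
```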
❗️ Specify the parameters for your request!
In order to specify all of our parameters easily, we're going to create a new `URLSearchParams` object. Here's the object declared for you. I've also already specified one parameter, `returnFaceId`, as `true` to provide an example. Add in a new parameter that requests emotion. Remember, these parameters are coming from the [Face API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)!

```js
let params = new URLSearchParams({
    'returnFaceId': 'true',
    '': '' //FILL IN THIS LINE
})
```
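For reference, a `URLSearchParams` object serializes into a standard query string; the keys and values below are just placeholders:

```js
const demo = new URLSearchParams({ 'a': '1', 'b': '2' });
console.log(demo.toString()); // "a=1&b=2"
```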

:hammer_and_wrench: Calling the API

In the same way we installed parse-multipart, install node-fetch. Read the API section of the documentation. We're going to make a call using the fetch(url, {options}) function.

To call the fetch function, use the `await` keyword, which we need because `fetch` returns a Promise (a proxy for a value that isn't currently known). You can read about Javascript promises here.
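Here's a minimal sketch of `fetch` plus `await` with node-fetch; the URL is just a stand-in, and any endpoint that returns JSON works:

```js
const fetch = require('node-fetch');

async function demo() {
    const resp = await fetch('https://api.github.com'); // fetch returns a Promise of the response
    const data = await resp.json();                     // .json() also returns a Promise
    console.log(data);
}

demo();
```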

API Documentation can be tricky sometimes... Here's something to help.

:bulb: Request Headers tell the receiving end of the request what type of data is in the body.

❓ How do I specify Request Headers?
Go back to the Face API documentation [here](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), and find the **Request headers** section. There are two headers that you need. I've provided the format below. Enter in the two header names and their two corresponding values.

FYI: The `Content-Type` header should be set to `'application/octet-stream'`. This specifies a binary file.

```js
//COMPLETE THE CODE
let resp = await fetch(uriBase + '?' + params.toString(), {
    method: '', //WHAT TYPE OF REQUEST?
    body: '', //WHAT ARE WE SENDING TO THE API?
    //ADD YOUR TWO HEADERS HERE
    headers: {
        '': ''
    }
})
```
❓ What is the URL?
Notice that the URL is just the uriBase with the params we specified earlier appended on. `const uriBase = '' + '/face/v1.0/detect';`
❓ Still Confused?

Fill in the `method` and `body`.

```js
async function analyzeImage(img){
    const subscriptionKey = '';
    const uriBase = '' + '/face/v1.0/detect';

    let params = new URLSearchParams({
        'returnFaceId': 'true',
        'returnFaceAttributes': 'emotion'
    })

    //COMPLETE THE CODE
    let resp = await fetch(uriBase + '?' + params.toString(), {
        method: '', //WHAT TYPE OF REQUEST?
        body: '', //WHAT ARE WE SENDING TO THE API?
        headers: {
            '': '' //do this in the next section
        }
    })

    let data = await resp.json();
    return data;
}
```

⏬ Receiving Data

Call the analyzeImage function in module.exports. Add the code below into module.exports.

Remember that `parts` represents the parsed multipart form data. It is an array of parts, each one described by a filename, a type, and the data. Since we only sent one file, it is stored in index 0, and we want the `data` property to access the binary file, hence `parts[0].data`. Then, in the HTTP response of our Azure function, we store the result of the API call.

```js
//module.exports function
//analyze the image
var result = await analyzeImage(parts[0].data);

context.res = {
    body: {
        result
    }
};
console.log(result)
context.done();
```
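Putting the pieces from this step together, the finished function might look something like the sketch below. Treat it as one possible answer key rather than the required implementation: the filled-in blanks (`POST`, the image bytes as the body, and the two headers) follow the documentation sections above, and it assumes the request body reaches the function as a Buffer that parse-multipart can read.

```js
var multipart = require('parse-multipart');
const fetch = require('node-fetch');

module.exports = async function (context, req) {
    // pull the raw image bytes out of the multipart body
    var boundary = multipart.getBoundary(req.headers['content-type']);
    var parts = multipart.Parse(req.body, boundary);

    // analyze the image and store the Face API result in the response
    var result = await analyzeImage(parts[0].data);
    context.res = {
        body: {
            result
        }
    };
    console.log(result)
    context.done();
};

async function analyzeImage(img){
    const subscriptionKey = process.env.SUBSCRIPTIONKEY;
    const uriBase = process.env.ENDPOINT + '/face/v1.0/detect';

    let params = new URLSearchParams({
        'returnFaceId': 'true',
        'returnFaceAttributes': 'emotion'
    })

    let resp = await fetch(uriBase + '?' + params.toString(), {
        method: 'POST',  // we are sending data to the server
        body: img,       // the binary image extracted by parse-multipart
        headers: {
            'Content-Type': 'application/octet-stream',
            'Ocp-Apim-Subscription-Key': subscriptionKey
        }
    })

    let data = await resp.json();
    return data;
}
```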