emsesc / test-week2


Week 2 #3

Open ghost opened 3 years ago

ghost commented 3 years ago

Week 2 Step 1

Parsing Multipart

In your Azure Portal, go into your HTTP function and into the index.js file. You should see starter code there with something like `module.exports...`. This is the file that we will be editing.

The Azure Function needs to:

  1. Receive and parse an image from a webpage
  2. Call the Face API and analyze the image

*(flowchart image)*

What is happening in this flowchart?

HTML Page: Where the user will submit an image; this page will also make a request with the image to the Azure Function.

Azure Function: Receives the request from HTML Page with the image. In order to receive it, we must parse the image...

Concepts to know:

We're going to be focusing on Part 1, which involves parsing multipart form data.

πŸ“ Task 1: Analyze image input using the Face API

❓ How do I parse image data?
In any HTML `<form>` element that involves a file upload (which ours does), the data is encoded using the `multipart/form-data` method. The default HTTP encoding method is `application/x-www-form-urlencoded`, which encodes text into name/value pairs and works for text inputs, but it is inefficient for files or binary inputs. `multipart/form-data` indicates that one or more files are being inputted. Parsing this type of data is a little more complicated than usual. To simplify the process, we're going to use an npm library called `parse-multipart`.
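For intuition, here's roughly what a raw `multipart/form-data` payload looks like. This sketch is adapted from the shape used in the `parse-multipart` docs; the boundary string, field name, and file contents are illustrative:

```js
// illustrative only: a raw multipart/form-data body with an assumed boundary
var exampleBody =
    '------WebKitFormBoundaryvef1fLxmoUdYZWXp\r\n' +
    'Content-Disposition: form-data; name="uploads[]"; filename="A.txt"\r\n' +
    'Content-Type: text/plain\r\n' +
    '\r\n' +
    'hello world\r\n' +
    '------WebKitFormBoundaryvef1fLxmoUdYZWXp--';
// parse-multipart's job is to split a body like this back into its file parts
```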
❓ How do I install parse-multipart?
To import Node packages, we use the `require` function:

```js
var multipart = require("parse-multipart");
```

This imports the `parse-multipart` package into our code, and we can now call any function in the package using `multipart.Function()`. Your function should look like this:

```js
var multipart = require("parse-multipart");

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
};
```


❓ What data is needed to parse the image?
Go to https://www.npmjs.com/package/parse-multipart for documentation and context on what the package does. Look specifically at the example in the "Usage" section and what they are doing, as we're going to do something similar. Notice that `multipart.Parse(body, boundary)` requires two parameters. I've already gotten the boundary for you – just like the documentation example, our boundary is a string in the format `"----WebKitFormBoundary(random characters here)"`. In the `multipart.Parse()` call, you need to figure out what the body parameter should be. Hint: It should be the request body. Think about the example Azure function. How did we access that?

```js
// here's your boundary:
var boundary = multipart.getBoundary(req.headers['content-type']);

// TODO: assign the body variable the correct value
var body = '';

// parse the body
var parts = multipart.Parse(body, boundary);
```
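If you get stuck, here's how that blank ends up being filled in; this matches the recap of the function shown later in Step 4, where the uploaded image arrives through the request body:

```js
// the uploaded image arrives in the HTTP request body
var body = req.body;

// parse the body
var parts = multipart.Parse(body, boundary);
```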


πŸ•οΈ To move on, commit your new code to this repository and add your function URL to a repository secret named FUNCTION_URL.

Once it successfully parses an image, you can move on!

ghost commented 3 years ago

Week 2 Step 2

Checkpoint #1

Congrats! You've made it to the first checkpoint. We're going to make sure your Azure Function can complete its first task: parsing the image.

  1. In order to make sure your function is correctly receiving and parsing the image, we are going to add a line of code in the `module.exports` function to print out the image data in our Azure console.

Hint: take a look at the `var parts...` line

Need an explanation?

* Use `context.log()` to print to the console.
* Our last line of code, `var parts = multipart.Parse(body, boundary);`, stored our parsed image.
* We only have one image, so let's access it in the array with the index of 0: `parts[0]`

Final line of code: `context.log(parts[0]);`
  1. Let's use our Postman skills to make a POST request to the Azure Function.

*(GIFs: switching the request type from GET to POST in Postman, copying the function URL from the Azure portal, and attaching the image file in the request Body; the same clips appear in Checkpoint 2 below.)*

Remember, we are using Postman to test the request before we go ahead and create the website that will allow us to upload an image and make the request. If you come across any errors here, make sure to debug them (Google is a great resource) before moving forwards.

:camping: To move on, comment a screenshot of your console.

emsesc commented 3 years ago

hi

ghost commented 3 years ago

Week 2 Step 3

πŸ“ Task 3: Create a Face API Endpoint

In this issue, you will be creating an instance of the 'Face' Resource in Azure.

❓ What does the Face API do?
The Face API will accept the image and return information about the face, specifically emotions. Watch this video on Microsoft Cognitive Services for an in-depth explanation: http://www.youtube.com/watch?v=2aA8OEZ1wk8


❓ Where do I begin?
1. Log into your Azure portal, press **Create a Resource**, select the **AI + Machine Learning** tab on the left, and finally select **Face** and fill out the necessary information. Record and save the API endpoint and [subscription key](https://docs.microsoft.com/en-us/azure/api-management/api-management-subscriptions).


❓ Where can I find the keys?
* Navigate to the home page on the Microsoft Azure portal (https://portal.azure.com/#home)
* Click on the resource you need the keys for
* On the left menu bar, locate the Resource Management section and click on "Keys and Endpoint"
:camping: To move on, place your two secrets in the repository secrets: `API_ENDPOINT` and `SUBSCRIPTION_KEY`
ghost commented 3 years ago

files: index.js stepType: PRmerge scripts: n/a week: 2 step: 4 name: Week 2 Step 4

Week 2 Step 4

Call the Face API P1: Setting Params

Recall the purpose of this Azure Function:

  1. Receive and parse an image from a webpage
  2. Call the Face API and analyze the image

*(faceapi flowchart image)*

What is happening in this flowchart?

Azure Function: We are now going to be creating parameters that the Azure Function will use to send a request to the Face API Endpoint we just created.

Concepts to know:

In this section, we'll be focusing on Part 2.

At this point, your Azure function should look like this:

```js
var multipart = require("parse-multipart");

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    var boundary = multipart.getBoundary(req.headers['content-type']);

    var body = req.body;

    var parts = multipart.Parse(body, boundary);
};
```

πŸ“ Task 4: Create a new function and set the parameters of your POST request.

❓ Where do I begin?
Create a new function, outside of `module.exports`, that will handle analyzing the image (this function is **async** because we will be using the **await** keyword with the API call). This function will be called `analyzeImage(img)` and takes in one parameter, `img`, that contains the image we're trying to analyze. Inside, we have two variables involved in the call: `subscriptionKey` and `uriBase`. Substitute the necessary values with your own info.

```js
async function analyzeImage(img){
    const subscriptionKey = process.env.SUBSCRIPTIONKEY;
    const uriBase = process.env.ENDPOINT + '/face/v1.0/detect';
}
```


❓ What are the parameters for the request?
The documentation for the Face API is here: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236. Read through it, and notice that the request URL is this:

`https://{endpoint}/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]`

All of the bracketed sections represent possible request parameters. Read through the **Request Parameters** section carefully. How can we specify that we want to get the emotion data?


❓ How do I specify parameters?
In order to specify all of our parameters easily, we're going to create a new `URLSearchParams` object. Here's the object declared for you. I've also already specified one parameter, `returnFaceId`, as `true` to provide an example. Add in a new parameter that requests emotion.

```js
let params = new URLSearchParams({
    'returnFaceId': 'true',
    '': ''  // FILL IN THIS LINE
})
```
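If you want to check your answer against something concrete: the same object reappears, completed, in Step 5's fetch example, where the blank is filled with the `returnFaceAttributes` parameter:

```js
let params = new URLSearchParams({
    'returnFaceId': 'true',
    'returnFaceAttributes': 'emotion'  // ask the Face API to include emotion scores
})
```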


πŸ•οΈ To move on, create and merge a pull request with your new additions of code to index.js.

:exclamation: Remember to use environment variables for your secrets! These can be referenced using `process.env.[your variable name]`

ghost commented 3 years ago

files: n/a stepType: checks scripts: test.emotiondata.js week: 2 step: 5 name: Week 2 Step 5

Week 2 Step 5

Call the Face API P2: Using Fetch

*(fetchapi flowchart image)*

What is happening in this flowchart?

There are many ways to make a POST request, but to stay consistent, we're going to use the package node-fetch. This package makes HTTP requests in a format similar to what we'll use for the rest of the project.

Concepts to know:

πŸ“ Task 5: Use Fetch in order to send a post request to the Face API Endpoint and receive emotion data

❓ Where do I begin?
**Install the package using the same format we did for `parse-multipart`.**

```js
// install the node-fetch package
var fetch = ''
```

When you've finished installing, read through the [**API** section](https://www.npmjs.com/package/node-fetch#api) of the documentation. We're going to make a call using the `fetch(url, {options})` function.

> API documentation can be tricky sometimes... Here's something to [help](https://learn.parabola.io/docs/reading-api-docs)
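If you're unsure, the import follows the exact same `require` pattern we used for `parse-multipart`:

```js
// import the node-fetch package, just like we did with parse-multipart
var fetch = require("node-fetch");
```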


❓ How do I use Fetch?
We're calling the `fetch` function - notice the **await** keyword, which we need because `fetch` returns a **Promise**, which is a proxy for a value that isn't currently known. You can read about JavaScript promises [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise). JavaScript is what we call a "synchronous" language, meaning operations block other operations from executing until they are complete, creating a single-directional flow: only one operation can happen at a time. However, in order to maximize efficiency (saving time and resources), JavaScript allows the use of asynchronous functions.


❓ What is an async function?
Simply put, async functions allow other operations to continue running while they are being executed. Refer to [this site](https://dev.to/hardy613/asynchronous-vs-synchronous-programming-23ed) for more information. Promises, much like their real-life meaning, return a value at some point in the future, or a reason why that value could not be returned; they represent the result of an async function that may or may not have resolved yet.

> [Is JavaScript Synchronous or Asynchronous? What the Hell is a Promise?](https://medium.com/better-programming/is-javascript-synchronous-or-asynchronous-what-the-hell-is-a-promise-7aa9dd8f3bfb)
> [JavaScript `Promise` reference on MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
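As a minimal sketch of the idea (the names here are illustrative):

```js
// a Promise that resolves with a value one second from now
function delayedValue() {
    return new Promise(resolve => setTimeout(() => resolve(42), 1000));
}

async function demo() {
    // await pauses *this* function until the Promise resolves,
    // but other operations elsewhere can keep running in the meantime
    const value = await delayedValue();
    console.log(value); // 42
}
```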


❓ What is the URL?
Notice that the URL is just the `uriBase` with the params we specified earlier appended on. For now, fill in the `method` and `body`.

```js
async function analyzeImage(img){
    const subscriptionKey = '';
    const uriBase = '' + '/face/v1.0/detect';

    let params = new URLSearchParams({
        'returnFaceId': 'true',
        'returnFaceAttributes': 'emotion'
    })

    // COMPLETE THE CODE
    let resp = await fetch(uriBase + '?' + params.toString(), {
        method: '',  // WHAT TYPE OF REQUEST?
        body: '',    // WHAT ARE WE SENDING TO THE API?
        headers: {
            '': ''   // do this in the next section
        }
    })

    let data = await resp.json();
    return data;
}
```


❓ How do I specify Request Headers?
Go back to the Face API documentation [here](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), and find the **Request headers** section. There are two headers that you need. I've provided the format below. Enter in the two header names and their two corresponding values. FYI: The `Content-Type` header should be set to `'application/octet-stream'`. This specifies a binary file.

```js
// COMPLETE THE CODE
let resp = await fetch(uriBase + '?' + params.toString(), {
    method: '',  // WHAT TYPE OF REQUEST?
    body: '',    // WHAT ARE WE SENDING TO THE API?
    // ADD YOUR TWO HEADERS HERE
    headers: {
        '': ''
    }
})
```


❓ How do I actually analyze the image?
Call the `analyzeImage` function in `module.exports`. Add the code below into `module.exports`. Remember that `parts` represents the parsed multipart form data. It is an array of parts, each one described by a filename, a type, and a data property. Since we only sent one file, it is stored in index 0, and we want the `data` property to access the binary file – hence `parts[0].data`. Then in the HTTP response of our Azure Function, we store the result of the API call.

```js
// module.exports function

// analyze the image
var result = await analyzeImage(parts[0].data);

context.res = {
    body: {
        result
    }
};

context.log(result);
context.done();
```
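For reference, here's a sketch of how all the pieces from this step fit together once the blanks are filled in. The environment variable names follow the earlier hints (`SUBSCRIPTIONKEY`, `ENDPOINT`), and `context.done()` is omitted because an async function signals completion by returning:

```js
var multipart = require("parse-multipart");
var fetch = require("node-fetch");

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    // recover the uploaded image from the multipart form data
    var boundary = multipart.getBoundary(req.headers['content-type']);
    var body = req.body;
    var parts = multipart.Parse(body, boundary);

    // send the binary image to the Face API and return its analysis
    var result = await analyzeImage(parts[0].data);
    context.res = {
        body: { result }
    };
    context.log(result);
};

async function analyzeImage(img) {
    const subscriptionKey = process.env.SUBSCRIPTIONKEY;
    const uriBase = process.env.ENDPOINT + '/face/v1.0/detect';

    let params = new URLSearchParams({
        'returnFaceId': 'true',
        'returnFaceAttributes': 'emotion'
    });

    let resp = await fetch(uriBase + '?' + params.toString(), {
        method: 'POST',
        body: img,
        headers: {
            'Content-Type': 'application/octet-stream',
            'Ocp-Apim-Subscription-Key': subscriptionKey
        }
    });

    return await resp.json();
}
```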


πŸ•οΈ To move on, commit your code to the repository!

Again, make sure not to commit any sensitive keys/tokens; use environment variables instead. We will be checking your function every time you commit.

ghost commented 3 years ago

files: n/a stepType: IssueComment scripts: n/a week: 2 step: 6 name: Week 2 Step 6

Week 2 Step 6

Checkpoint 2

Time to test our completed Azure Function! It should now successfully do these tasks:

  • Parse the image
  • Send image to Face API and return Face data

This time, we won't need to add any additional code, as the completed function should return the emotion data on its own.

πŸ“ Task 6: Test your function to see if it outputs face data!

:exclamation: These are the same steps as Checkpoint #1. Click here if you need them!
* Navigate back to the Postman Chrome extension app and change GET to POST
![Untitled_ Nov 11, 2020 6_24 PM](https://user-images.githubusercontent.com/69332964/98876201-c3bca780-244b-11eb-9b94-8d3cecc80115.gif)
* Copy your function's URL from the Azure Function App portal like this:
![httptrigger - Microsoft Azure](https://user-images.githubusercontent.com/69332964/98876502-6f65f780-244c-11eb-832b-a25888b980da.gif)
* Use the function URL and any image you want to send the POST request. Remember to attach the file in Body!
![Untitled_ Nov 11, 2020 6_40 PM](https://user-images.githubusercontent.com/69332964/98876997-780afd80-244d-11eb-87fc-13822d909f2f.gif)

The only difference is that this time we should receive an output in Postman instead:

Make sure you're using an image with a real face in it, or else it won't work. Here's an example of the output I get with this image:

*(Image credit: https://thispersondoesnotexist.com/)*

```json
{
  "result": [
    {
      "faceId": "d25465d6-0c38-4417-8466-cabdd908e756",
      "faceRectangle": {
        "top": 313,
        "left": 210,
        "width": 594,
        "height": 594
      },
      "faceAttributes": {
        "emotion": {
          "anger": 0,
          "contempt": 0,
          "disgust": 0,
          "fear": 0,
          "happiness": 1,
          "neutral": 0,
          "sadness": 0,
          "surprise": 0
        }
      }
    }
  ]
}
```

:camping: To move on, comment the output you received from the POST request!

emsesc commented 3 years ago

done

ghost commented 3 years ago

Week 2 Step 7

Setting up the Twilio API

Twilio is a cloud communication platform used to programmatically send SMS messages, make calls, and more. You will be using the Twilio API to send yourself information from the Face data you obtained in the last issue.

πŸ“ Task 7: Enable your Twilio account to send messages

You will need to buy a new number on Twilio. On a free trial, you have to verify each number that you send messages to or call, so set up a phone number to be verified on your Twilio account.

Note: Don't worry about paying for anything; you will receive 10 dollars of credit from Twilio, which is more than enough for this project. Numbers should only cost 1 dollar.

❓ Where do I sign up?
Go to the [Twilio website](https://www.twilio.com/try-twilio) and create an account.
❓ How do I set up a new number?
Navigate to your dashboard, then press the three dots on the navigation panel to the left of your screen. Click on the 'phone numbers' option, then press the blue button in the top right corner to buy your own number, preferably from your location.
❓ How do I verify my number?
Hint: Go back to `phone numbers` in your dashboard and browse the options.

πŸ•οΈ To move on, comment your favorite form of communiation!

emsesc commented 3 years ago

done

ghost commented 3 years ago

Week 2 Step 8

Connecting Locally to the Twilio API

It is time to set up your local environment! You will need to install the necessary package(s) for the Twilio API to work in your new function.

πŸ“ Task 8: Set up your local environment

❓ Where do I begin?
Create a new directory on your computer, make an HTTP function with a Node runtime, and copy and paste the Face API code from the previous issue.
❓ How do I install Twilio?
In your project directory, initialize npm, then use the command `npm install twilio` to add the Twilio package to your local environment.

πŸ•οΈ To move on, create a PR with your new files!

Make sure to include your `package.json` file, which shows what packages are necessary to run your code.

ghost commented 3 years ago

files: index.js stepType: PRmerge scripts: n/a week: 2 step: 9 name: Week 2 Step 9

Week 2 Step 9

Calling the Twilio API

Now that your account and coding environment are set up, it is time to get to work! As a reminder, you are using the Twilio API to send Face API data as a text message. Connect your Twilio profile to your local environment, then send a message by calling the API. Your message should describe the emotion on an inputted image based on the Face API results, such as "the person in your inputted image is sad".

Hint: all of this can be done programmatically!

πŸ“ Task 9: Call the Twilio API

❓ Where do I begin?
In your function file but outside of `module.exports`, declare 3 constants: one for your account SID, another for your auth token, and the last for the Twilio client.
❓ Where do I find the auth token and account SID?
The auth token and account SID can be found on your [Twilio dashboard](https://www.twilio.com/) console.
❓ How do I import Twilio?
Check that you have installed Twilio through npm by typing `twilio --version` in your terminal. Afterwards, initialize the `client` constant you declared by requiring the Twilio package, passing in the account SID and auth token as parameters.
❓ What function do I call?
Look at the functions your client can perform. If you are stuck, you can refer to the Twilio docs at https://www.twilio.com/docs/sms.
❓ What parameters do I need in my Twilio API call?
Make sure to at least have a `body` (the message you are sending) and a `from` (your Twilio phone number) in the function call.
❓ Need help navigating JSON?
You already have all the code to obtain this information, along with some other pieces of data. To isolate the emotion, look at the format of the returned information and pick out what you need. For additional help, use [this resource](https://stackoverflow.com/questions/10368171/how-to-extract-a-json-object-thats-inside-a-json-object)!
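Tying the hints above together, a minimal sketch might look like this. The environment variable names (`TWILIO_SID`, `TWILIO_AUTH_TOKEN`, `TWILIO_NUMBER`, `MY_NUMBER`) and the `getEmotion` helper are illustrative, not prescribed by this project:

```js
// outside module.exports: credentials and the Twilio client
const accountSid = process.env.TWILIO_SID;        // hypothetical variable name
const authToken = process.env.TWILIO_AUTH_TOKEN;  // hypothetical variable name
const client = require("twilio")(accountSid, authToken);

// hypothetical helper: pick the strongest emotion out of the Face API result
function getEmotion(result) {
    const emotions = result[0].faceAttributes.emotion;
    return Object.keys(emotions).reduce((a, b) => (emotions[a] > emotions[b] ? a : b));
}

// called from module.exports after analyzeImage(...) returns its result
async function sendEmotionText(result) {
    await client.messages.create({
        body: "the person in your inputted image is " + getEmotion(result),
        from: process.env.TWILIO_NUMBER,  // your Twilio phone number
        to: process.env.MY_NUMBER         // the number you verified on your trial account
    });
}
```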

:camping: To move on, create and merge a PR with your working Azure function's updated files.

Note: Make sure the constants holding your account information are read from environment variables rather than hard-coded values. You can do this using `process.env`.