Congrats! You've made it to the first checkpoint. We're going to make sure your Azure Function can complete its first task: parsing the image.
Hint: take a look at the `var parts` line...
Remember, we are using Postman to test the request before we go ahead and create the website that will allow us to upload an image and make the request. If you come across any errors here, make sure to debug them (Google is a great resource) before moving forward.
In this issue, you will be creating an instance of the 'Face' Resource in Azure.
Recall the purpose of this Azure Function:
What is happening in this flowchart?
Azure Function: We are now going to be creating parameters that the Azure Function will use to send a request to the Face API Endpoint we just created.
In this section, we'll be focusing on Part 2.
At this point, your Azure function should look like this:
`index.js`:

```js
var multipart = require("parse-multipart");

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    var boundary = multipart.getBoundary(req.headers['content-type']);
    var body = req.body;
    var parts = multipart.Parse(body, boundary);
};
```
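For context on what the last line produces: `multipart.Parse` returns an array with one object per form field, each holding `filename`, `type`, and `data`. A purely illustrative sketch of that shape (the file name, MIME type, and bytes below are made-up values, not real output):

```javascript
// Illustrative shape of `parts` after multipart.Parse(body, boundary).
// Values are invented for demonstration; the real data comes from the upload.
var parts = [
  {
    filename: 'selfie.png',        // name of the uploaded file
    type: 'image/png',             // its MIME type
    data: Buffer.from([137, 80])   // raw bytes of the file (truncated here)
  }
];

// The uploaded image itself lives in parts[0].data:
console.log(parts[0].filename, parts[0].data.length);
```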
`process.env.YOUR_VARIABLE_NAME`
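As a minimal sketch of reading such a variable (`FACE_API_KEY` is a hypothetical name; set the real value in your Function App's application settings so the key never appears in committed code):

```javascript
// FACE_API_KEY is a hypothetical setting name; configure the real value
// under Configuration > Application settings in the Azure Portal.
const subscriptionKey = process.env.FACE_API_KEY;

// Fail loudly during local testing if the setting is missing.
console.log(subscriptionKey ? 'key loaded' : 'FACE_API_KEY is not set');
```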
What is happening in this flowchart?
There are many ways to make a POST request, but to stay consistent, we're going to use the package `node-fetch`. This package makes HTTP requests in a format similar to what we'll use for the rest of the project.
You'll be sending the parsed image, `parts[0]`, in the body of that request. First, install the package in your function's directory:

```
npm install node-fetch
```
When you've finished installing, read through the [**API** section](https://www.npmjs.com/package/node-fetch#api) of the documentation. We're going to make a call using the `fetch(url, {options})` function.
> API Documentation can be tricky sometimes...Here's something to [help](https://learn.parabola.io/docs/reading-api-docs)
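To make the shape of that call concrete, here is a hedged sketch, not the definitive solution: the environment-variable names (`FACE_ENDPOINT`, `FACE_KEY`) and the helper names are my assumptions, and the Face API path shown is the standard `detect` route with emotion attributes. On Node 18+ `fetch` is built in; with `node-fetch` you would add `const fetch = require('node-fetch');` at the top.

```javascript
// Build the Face API detect URL from your resource's endpoint.
// The query string asks the API to return emotion attributes.
function detectUrl(endpoint) {
  return endpoint + '/face/v1.0/detect?returnFaceAttributes=emotion';
}

// Send the parsed image bytes to the Face API and return the JSON result.
// FACE_ENDPOINT and FACE_KEY are assumed environment variables.
async function analyzeImage(imageBuffer) {
  const response = await fetch(detectUrl(process.env.FACE_ENDPOINT), {
    method: 'POST',
    headers: {
      'Content-Type': 'application/octet-stream',        // raw image bytes
      'Ocp-Apim-Subscription-Key': process.env.FACE_KEY  // never hard-code this
    },
    body: imageBuffer // e.g. parts[0].data from parse-multipart
  });
  return response.json();
}
```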
Again, make sure not to commit any sensitive keys/tokens; use environment variables instead. We will be checking your function every time you commit.
Time to test our completed Azure Function! It should now successfully do these tasks:
This time, we won't need to add any additional code, as the completed function should return the emotion data on its own.
The only difference is that we should receive an output in Postman instead:
Make sure you're using an image with a real face on it or else it won't work. Here's an example of an output I get with this image:
Credits: https://thispersondoesnotexist.com/
```json
{
  "result": [
    {
      "faceId": "d25465d6-0c38-4417-8466-cabdd908e756",
      "faceRectangle": {
        "top": 313,
        "left": 210,
        "width": 594,
        "height": 594
      },
      "faceAttributes": {
        "emotion": {
          "anger": 0,
          "contempt": 0,
          "disgust": 0,
          "fear": 0,
          "happiness": 1,
          "neutral": 0,
          "sadness": 0,
          "surprise": 0
        }
      }
    }
  ]
}
```
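Later on you'll want to turn this JSON into a single emotion name for your text message. A small sketch of picking out the highest-scoring emotion (the function name is mine, not from the starter code; the sample object mirrors the response format above):

```javascript
// Pick the emotion with the highest score from a Face API response.
function dominantEmotion(faceApiResponse) {
  const emotions = faceApiResponse.result[0].faceAttributes.emotion;
  return Object.keys(emotions).reduce(
    (best, name) => (emotions[name] > emotions[best] ? name : best)
  );
}

// Using a trimmed-down version of the sample response above:
const sample = {
  result: [
    { faceAttributes: { emotion: { anger: 0, happiness: 1, neutral: 0, sadness: 0 } } }
  ]
};
console.log(dominantEmotion(sample)); // → happiness
```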
Twilio is a cloud communication platform used to programmatically send SMS messages, make calls, etc. You will be using the Twilio API to send yourself information from the Face data you obtained in the last issue.
You will need to buy a new number on Twilio. On a free trial, you have to verify each number that you send messages to or call, so set up your phone number to be verified on your Twilio account.
Note: Don't worry about paying for anything; you will receive 10 dollars of credit from Twilio, which is more than enough for this project. Numbers should only cost 1 dollar.
It is time to set up your local environment! You will need to install the necessary package(s) for the Twilio API to work in your new function.
Make sure to include your `package.json` file, which shows what packages are necessary to run your code.
Now that your account and coding environment are set up, it is time to get to work! As a reminder, you are using the Twilio API to send Face API data as a text message. Connect your Twilio profile to your local environment, then send a message by calling the API. Your message should describe the emotion on an inputted image based on the Face API results, such as "the person in your inputted image is sad".
Hint: all of this can be done programmatically!
Note: Make sure to instantiate constants handling your account information with environment variables rather than the actual values. You can do this using `process.env`.
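A hedged sketch of what that might look like, assuming the `twilio` helper library is installed (`npm install twilio`) and that the environment-variable names below are your own choices, not fixed requirements:

```javascript
// Compose the SMS text from an emotion name (hypothetical helper).
function emotionMessage(emotion) {
  return `The person in your inputted image is ${emotion}.`;
}

// Send the message via Twilio. All credentials and numbers come from
// environment variables; never paste the actual values into your code.
async function sendEmotionText(emotion) {
  // Requires `npm install twilio` in this function's directory.
  const twilio = require('twilio');
  const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);
  return client.messages.create({
    body: emotionMessage(emotion),
    from: process.env.TWILIO_PHONE_NUMBER, // your purchased Twilio number
    to: process.env.MY_PHONE_NUMBER        // your verified personal number
  });
}
```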
## Week 2 Step 1: Parsing Multipart
In your Azure Portal, go into your HTTP function and into the `index.js` file. You should see starter code there that has something like `module.exports...`. This is the file that we will be editing. The Azure Function needs to:
What is happening in this flowchart?
HTML Page: Where the user will submit an image; this page will also make a request with the image to the Azure Function.
Azure Function: Receives the request from HTML Page with the image. In order to receive it, we must parse the image...
Concepts to know:
We're going to be focusing on Part 1, which involves parsing multipart form data.
**Task 1: Analyze image input using the Face API**
**How do I parse image data?**

**What data is needed to parse the image?**
Go to https://www.npmjs.com/package/parse-multipart for documentation and context on what the package does. Look specifically at the example in the "Usage" section and what they are doing, as we're going to do something similar. Notice that `multipart.Parse(body, boundary)` requires two parameters. I've already gotten the boundary for you; just like the documentation example, our boundary is a string in the format `"----WebKitFormBoundary(random characters here)"`.

In the `multipart.Parse()` call, you need to figure out what the body parameter should be. Hint: It should be the request body. Think about the example Azure function. How did we access that?

```js
//here's your boundary:
var boundary = multipart.getBoundary(req.headers['content-type']);
// TODO: assign the body variable the correct value
var body = ...;
```

To move on, commit your new code to this repository and add your function URL to a repository secret named `FUNCTION_URL`. Once it successfully parses an image, you can move on!