github-learning-lab[bot] opened 3 years ago
Committed a basic HTML file to test my emotion reader Azure Function!
JavaScript: In order to send requests to the Azure Function and receive its responses, we need JavaScript to handle these tasks.
Now that our image has been analyzed by the Face API, we have received emotion data in JSON format. Our task is to read the JSON and output the emotions in the image (i.e. anger, contempt, disgust, etc.).
The first thing we need to do is create a function called loadFile(event) {} which creates a variable called image that grabs the element with the ID "output" and displays the image the user uploaded. To do this, we need to get the element by its ID using document.getElementById(id):
var image = //Call for element ID here
We also need to create a temporary URL for the image source using URL.createObjectURL(event.target.files[0]). Use .src on the image variable and set it to the URL of the first file uploaded, using the code above. When it's all put together, it should look something like this:
function loadFile(event) {
  var image = //set div for image output
  image.src = //Load inputted image
}
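For reference, a filled-in version might look like the following. This is a minimal sketch that assumes the img element uses the id "output", as described above:

```javascript
// Minimal sketch of loadFile, assuming an <img> element with id "output".
function loadFile(event) {
  // Grab the element that will display the image preview
  var image = document.getElementById('output');
  // Create a temporary object URL for the first file the user selected
  image.src = URL.createObjectURL(event.target.files[0]);
}
```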
Quick intermission:
Now we need to create our main function called handle(event) {}. This will take in the form submission; then, using the data from the Face API, it will build new HTML content that lists the different emotions and displays their numerical values.
Using jQuery, target the output element ID and change the content to equal "Loading". Then add the line event.preventDefault(); to prevent the page from reloading when the form is submitted. To target elements with jQuery, use this sample:
console.log("submitting form...");
$(/*Add ID here*/).html(/*Add content here*/);
//make sure to disable reload here...
After telling our HTML to show the content "Loading", we need to set a few variables to build the request with our form data. We can do this by setting a variable to the form element (found by its ID) and creating a new FormData object from it:
var myform = document.getElementById('myform');
var payload = new FormData(myform);
const resp = await fetch(/*Add Function URL */, {
  method: 'POST',
  body: payload
});
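One caveat: await is only valid inside an async function, so this fetch call has to live in one (we'll declare handle as async). As a standalone sketch — sendImage is a hypothetical helper name introduced only for illustration, and the URL is a placeholder:

```javascript
// Sketch only: await requires an enclosing async function.
// sendImage is a hypothetical name; replace the placeholder URL with your own.
async function sendImage(formElement) {
  // Bundle the form fields (including the uploaded file) for the POST body
  var payload = new FormData(formElement);
  const resp = await fetch('<your Azure Function URL>', {
    method: 'POST',
    body: payload
  });
  return resp;
}
```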
Next, we have to add a variable for the JSON data. Make a new variable called data and set it to the response JSON like we did earlier. If you need a hint, use the .json() function. We also need another variable, emotion, which will be set to data.result[0].faceAttributes.emotion. This sets emotion to the first face result in our JSON data and pulls the emotion scores out into a value.
Just like we tested the Azure Function earlier by printing out something into our log, we will do the same with our JavaScript.
Objective: Check whether the code successfully makes a request to the Azure Function and receives emotion data by logging the value of the variable emotion.
Tip: Check the log by opening the developer tools ("Inspect element" or F12) and clicking on Console:
Lastly, we have to create the HTML that will actually be displayed:
i. First, create an <h3> tag for the title, labelled "Emotions in the image:" (make sure to add <br /> at the end to skip a line).
ii. Now create 8 <p> tags that each show data for a different emotion. The emotions are anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise.
iii. To get the data, remember we set var emotion to pull the first face's values. All we have to do is use jQuery and this formatting. I've done the first one for you:
var resultString = `
  <h3> Emotions in the image: </h3><br />
  <p> anger: ${emotion.anger}</p>
`;
// Finish for the other emotions using the same format (e.g. ${emotion.contempt}, etc.)
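For reference, the completed template covering all eight emotions might look like this. (The emotion object below is filled with made-up sample scores so the snippet stands on its own; in your code, emotion comes from the Face API response as described above.)

```javascript
// Sample emotion scores, standing in for the Face API response values.
var emotion = {
  anger: 0.01, contempt: 0, disgust: 0, fear: 0,
  happiness: 0.95, neutral: 0.03, sadness: 0.01, surprise: 0
};

// One <p> tag per emotion, all following the same template format.
var resultString = `
  <h3> Emotions in the image: </h3><br />
  <p> anger: ${emotion.anger}</p>
  <p> contempt: ${emotion.contempt}</p>
  <p> disgust: ${emotion.disgust}</p>
  <p> fear: ${emotion.fear}</p>
  <p> happiness: ${emotion.happiness}</p>
  <p> neutral: ${emotion.neutral}</p>
  <p> sadness: ${emotion.sadness}</p>
  <p> surprise: ${emotion.surprise}</p>
`;
```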
The last thing we need to do is use jQuery to change the contents of the div with id emotion.
$('#emotion').html(/* Emotion Data should go here */);
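Putting the pieces together, the whole handle function might look something like this sketch. The Function URL is a placeholder you must replace with your own, and the element ids here match the snippets above — double-check they match your own HTML:

```javascript
// A complete sketch of handle(event), assembled from the steps above.
async function handle(event) {
  console.log("submitting form...");
  $('#emotion').html("Loading");
  event.preventDefault(); // stop the form from reloading the page

  // The form id here follows the snippet above; make sure it matches your HTML.
  var myform = document.getElementById('myform');
  var payload = new FormData(myform);

  // Replace the placeholder with your own Azure Function URL.
  const resp = await fetch('<your Azure Function URL>', {
    method: 'POST',
    body: payload
  });

  const data = await resp.json();
  var emotion = data.result[0].faceAttributes.emotion;
  console.log(emotion); // check the Console to confirm the request worked

  var resultString = `
    <h3> Emotions in the image: </h3><br />
    <p> anger: ${emotion.anger}</p>
    <p> contempt: ${emotion.contempt}</p>
    <p> disgust: ${emotion.disgust}</p>
    <p> fear: ${emotion.fear}</p>
    <p> happiness: ${emotion.happiness}</p>
    <p> neutral: ${emotion.neutral}</p>
    <p> sadness: ${emotion.sadness}</p>
    <p> surprise: ${emotion.surprise}</p>
  `;
  $('#emotion').html(resultString);
}
```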
To move on to the last step, commit your code and comment a description of what you completed!
Committed the backend C# code to print the data from Face API on the uploaded photo!
Now that we can display emotion data and run our function locally, let's try deploying it onto the web so that everyone can access the website remotely. We'll be doing this through Azure Static Web Apps.
Web deployment is the process of moving our code (in our case, our HTML, CSS, and JS) from the local computer to a remote hosting platform. Once deployed, a website can be accessed by other computers, not just your own.
Azure Static Web Apps is a service that automatically builds and deploys full stack web apps to Azure from a GitHub repository. That means we can simply commit our code to GitHub, and Azure will take the code from the repo and deploy it in a custom domain.
Let's get started.
The first step would be to commit your code as a repo in GitHub, but in your case, your code should already be committed. Thus, we'll simply move on to deployment.
Installing Azure Static Web Apps VSCode Extension
In Visual Studio Code, open your GitHub repo.
Select the Extensions icon in the Activity Bar on the left. It should look like four squares, with the top-right square separated from the rest, like this:
Search for 'Azure Static Web Apps', and you should see the extension 'Azure Static Web Apps (Preview)'. Install this extension.
Creating an Azure Static Web App
Now, select the Azure logo in the Activity Bar on the left to open the Azure extensions window. It should look something like:
Note: Make sure you're signed into Azure and GitHub from Visual Studio Code. You should receive a prompt to sign in if you are not.
Hover over the STATIC WEB APPS (PREVIEW) label and click the plus sign to create a new one.
Enter a name for your new web app.
Select the master branch.
You will then see "Select the location of your application code". This is asking for the location of the API in your application, but we will not have one in our case. Click the "/", and "Skip for now" should appear; select it. However, if you plan to add an API, you can read more about it here.
Now, you should see "Enter the path of your build output relative to your app's location". This is relevant if you are using certain frontend frameworks, but since we are using vanilla JS, we can simply clear the field and press Enter. However, if you are using a framework, such as Angular, React, or Vue, you can read more about its implementation at this link.
Select a location nearest you and press Enter.
Once your app has been created, a confirmation notification will appear in Visual Studio Code that says "Successfully created new static web app...".
Go back to the STATIC WEB APPS (PREVIEW) label and expand your subscription name underneath. You should see the name of your app. Right click and select Open in Portal to view the app in the Azure portal.
The portal should open in a new browser window with your web app displayed. Click the "URL" to see your web app now deployed remotely!
Congratulations, you've just deployed a web app using Azure Static Web Apps!
To officially complete this learning lab, please comment what you did this lesson.
Creating an HTML Page
It's time to begin working on the frontend of the application!
What is happening in this flowchart?
HTML Page: Begin coding the HTML and CSS frontend of the Emotion Reader that will directly interact with the user.
Concepts to know:
Important: For this week's issues, you must commit your code to this repository to move on to the next step. At each step, you should be completing the task and committing your solution code.
After watching the live demo, you should know the basics of how to create a simple website using HTML, plus some CSS if you want your webpage to look fancy. Now, your task is to create your own HTML page that takes in an image using a <form> and outputs the image's emotion data. If you still need some help learning HTML and CSS, check out these resources:
Here's a list of HTML items you need to create (please use the id's specified):
- A header element that says anything you want. For example, mine says Example Project.
- A div element with id container that will surround all of your elements.
- A form element with id image-form, with onsubmit="handle(event)". Set the enctype attribute to multipart/form-data. Remember that for forms that receive file uploads, we need to specify this type of encoding.
- An input element that allows a file upload, where the user will upload an image. Set the onChange attribute to "loadFile(event)". Use the accept attribute to only allow image submissions. Finally, set the name attribute to image.
- An img element with id output. This is going to display the image that the user selects.
- A button element with the type attribute set to submit. The text inside should say "Submit Picture" or something similar. This will submit the image.
- A div with the id emotion. This is where the emotion analysis results will be displayed.

Lastly, make sure to reference jQuery:
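Put together, the page might look something like this minimal sketch (the header text and the jQuery CDN version are just example choices, not requirements):

```html
<!-- Minimal sketch of the page, using the ids listed above. -->
<header>Example Project</header>
<div id="container">
  <form id="image-form" onsubmit="handle(event)" enctype="multipart/form-data">
    <input type="file" name="image" accept="image/*" onchange="loadFile(event)" />
    <img id="output" />
    <button type="submit">Submit Picture</button>
  </form>
  <div id="emotion"></div>
</div>
<!-- Reference jQuery (any recent version from a CDN works) -->
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
```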
Checkpoint #3
Go live
as shown. After that, you're done with the frontend. It's time to use JavaScript!
To move on, commit your code and comment a description of what you completed!