aashrayr / student2

MIT License

ML Project #9

Open aashrayr opened 7 months ago

aashrayr commented 7 months ago

I worked with Aashray and Kyle to develop a new front end, along with a facial recognition feature that determines a passenger's age and gender from a photo.

The UI provides an option to add an image; when we add a reference photo, the survival probability is displayed.

function calculateSurvival() {
    // Extract data from user input
    extractData()
        .then((AIData) => {
            // Construct passenger data object
            const passengerData = {
                name: document.getElementById("name").value,
                pclass: parseInt(getCheckedCheckboxValue("pclass")),
                sex: AIData[0],
                age: AIData[1],
                sibsp: parseInt(document.getElementById("sibsp").value),
                parch: parseInt(document.getElementById("parch").value),
                fare: parseFloat(document.getElementById("fare").value),
                embarked: getCheckedCheckboxValue("embarked"),
                alone: document.getElementById("alone").value === "true",
            };

            // Prepare request body
            const requestData = {
                passenger: passengerData,
            };

            // Set options for fetch request
            const requestOptions = {
                method: "POST",
                cache: "no-cache",
                body: JSON.stringify(requestData),
                headers: {
                    // "Access-Control-Allow-Origin" is a response header set by the
                    // server, so it is not sent with the request
                    "Content-Type": "application/json",
                },
            };

            // Send POST request to backend
            fetch(url, requestOptions)
                .then((response) => {
                    // Check for errors in response
                    if (!response.ok) {
                        console.error("Failed to fetch data:", response.status);
                        return;
                    }
                    return response.json();
                })
                .then((data) => {
                    if (data === undefined) return; // earlier handler bailed on a bad response
                    // Update UI with survival probability
                    const survivalProbability = data[0];
                    const survivalElement = document.getElementById("survival");
                    survivalElement.textContent = survivalProbability;
                })
                .catch((err) => {
                    // Handle errors
                    console.error("Error:", err);
                });
        })
        .catch((err) => {
            // Handle errors
            console.error("Error:", err);
        });
}

calculateSurvival() begins by calling the extractData() function, which handles the pre-processing steps for the image data.

// image deepface data extraction
function extractData() {
    // Display loading animation
    document.getElementById("body").style.filter = "blur(20px)";
    document.getElementById("loader").style.display = "block";

    // Return a promise for asynchronous handling
    return new Promise((resolve, reject) => {
        // Extract base64 encoded image data
        const image = document.getElementById("photo");
        convertImageToBase64(image)
            .then((b64) => {
                // Prepare image data for API request
                const post_options = {
                    method: "POST",
                    cache: "no-cache",
                    body: JSON.stringify(b64),
                    headers: {
                        // "Access-Control-Allow-Origin" is a response header set by the
                        // server, so it is not sent with the request
                        "Content-Type": "application/json",
                    },
                };

                // Send image data to the recognition API for analysis
                fetch(imageRecognition, post_options)
                    .then((response) => {
                        if (!response.ok) {
                            // Hide the loading animation before rejecting on an HTTP error
                            document.getElementById("loader").style.display = "none";
                            document.getElementById("body").style.filter = "blur(0px)";
                            reject(new Error(`Failed to fetch image recognition data: ${response.status}`));
                            return;
                        }
                        return response.json();
                    })
                    .then((data) => {
                        if (data === undefined) return; // the promise was already rejected above
                        // Hide loading animation and resolve with processed data
                        document.getElementById("loader").style.display = "none";
                        document.getElementById("body").style.filter = "blur(0px)";
                        resolve(data);
                    })
                    .catch((err) => {
                        // Handle errors during API call
                        reject(err);
                    });
            })
            .catch((err) => {
                // Hide loading animation and handle errors in image processing
                document.getElementById("loader").style.display = "none";
                document.getElementById("body").style.filter = "blur(0px)";
                reject(err);
            });
    });
}

This function manages the loading animation, retrieves the base64 encoded image data, and sends it to the recognition API for analysis. After receiving the processed data, it resolves the promise with the extracted information.

Model Behind the Scenes:

The model behind the scenes loads the age prediction model and utilizes it to make predictions based on the image data.

# methods of the age-model wrapper class (excerpt)
def __init__(self):
    self.model = load_model()
    self.model_name = "Age"

def predict(self, img: np.ndarray) -> np.float64:
    age_predictions = self.model.predict(img, verbose=0)[0, :]
    return find_apparent_age(age_predictions)

The load_model function constructs the age prediction model, downloads its weights, and loads them. The model is then returned for prediction.
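The find_apparent_age helper is not shown above. In DeepFace-style age models the final layer outputs a probability distribution over the integer ages 0–100, and the apparent age is the expected value of that distribution. A minimal sketch under that assumption:

```python
import numpy as np

def find_apparent_age(age_predictions: np.ndarray) -> np.float64:
    """Apparent age as the expectation over the 101 age classes (0..100)."""
    output_indexes = np.arange(0, 101)
    return np.sum(age_predictions * output_indexes)
```

Because the result is a weighted average rather than an argmax, the model can return fractional ages such as 27.4.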

After analyzing the image data, the post-processing step extracts relevant information such as age and gender and returns it for further processing.

# data extraction
data = information[0]  # get the first element
age = data["age"]  # age extraction

bothGenders = data["gender"]  # get both gender confidence rates
woman = bothGenders["Woman"]  # woman confidence
man = bothGenders["Man"]  # man confidence

# based on the probabilities, find which is larger and return that
gender = None
if woman > man:
    gender = "Female"
elif woman < man:
    gender = "Male"

# the order is very important (must be gender THEN age)
returnData = [gender, age]  # create a list and send it

return returnData

This step involves extracting age and gender information from the analysis results, determining the predominant gender based on probabilities, and packaging the data for further use.
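The gender-selection step above can be isolated into a small helper (the function name is ours). Note one edge case the original logic has: if the two confidence scores are exactly equal, neither branch fires and no gender is returned.

```python
from typing import Optional

def pick_gender(confidences: dict) -> Optional[str]:
    """Return 'Female' or 'Male' from DeepFace-style gender confidence scores."""
    woman = confidences["Woman"]
    man = confidences["Man"]
    if woman > man:
        return "Female"
    elif woman < man:
        return "Male"
    return None  # exact tie: the original logic leaves gender unset
```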

The final step involves sending the extracted data, both from the image analysis and user inputs, to the Titanic model for survival prediction.

function calculateSurvival() {
    extractData()
        .then((AIData) => {
            // Gather passenger data from user inputs and image analysis
            const passengerData = {
                name: document.getElementById("name").value,
                pclass: parseInt(getCheckedCheckboxValue("pclass")), // Get passenger class
                sex: AIData[0], // Get gender from image analysis
                age: AIData[1], // Get age from image analysis
                sibsp: parseInt(document.getElementById("sibsp").value), // Get number of siblings/spouse aboard
                parch: parseInt(document.getElementById("parch").value), // Get number of parents/children aboard
                fare: parseFloat(document.getElementById("fare").value), // Get fare
                embarked: getCheckedCheckboxValue("embarked"), // Get embarked location
                alone: document.getElementById("alone").value === "true", // Check if passenger is alone
            };

            // Prepare request body
            const body = {
                passenger: passengerData,
            };

            // Set options for fetch request
            const post_options = {
                method: "POST",
                cache: "no-cache",
                body: JSON.stringify(body),
                headers: {
                    // "Access-Control-Allow-Origin" is a response header set by the
                    // server, so it is not sent with the request
                    "Content-Type": "application/json",
                },
            };

            // Send POST request to Titanic model
            fetch(url, post_options)
                .then((response) => {
                    if (!response.ok) {
                        // Handle errors in response
                        console.error("Failed to fetch data:", response.status);
                        return;
                    }

                    return response.json();
                })
                .then((data) => {
                    if (data === undefined) return; // earlier handler bailed on a bad response
                    // Update UI with survival prediction
                    const survivalProbability = data[0];
                    const h1 = document.getElementById("survival");
                    h1.textContent = survivalProbability;
                })
                .catch((err) => {
                    // Handle fetch errors
                    console.error(err);
                });
        })
        .catch((err) => {
            // Handle pre-processing errors
            console.error(err);
        });
}

This function gathers user inputs and image analysis data to create a passenger profile. It then sends this data to the Titanic model for survival prediction, updating the UI with the result.

My Key Portions

This code defines a Recognition class whose recognize method processes a base64-encoded image to extract age and gender. It strips any data-URI header, decodes the image, runs it through the analysis module (dp), and returns the extracted data.

class Recognition:
    def recognize(self, base64_encoded):
        # image parsing in preparation
        b64_string = base64_encoded  # raw base64 string; a data-URI header may be present

        if b64_string.startswith("data:image"):  # if headers --> remove
            b64_string = b64_string.split(",", 1)[1]

        image_data = base64.b64decode(b64_string)  # decoding

        image = Image.open(io.BytesIO(image_data))  # get the image

        image_np = np.array(image)  # convert to np array

        # analysis
        attributes = ["age", "gender"]  # which attributes
        information = dp.analyze(image_np, attributes)  # analyze

        # -------------------------------------------------------------------------------- #

        # data extraction
        data = information[0]  # get the first element
        age = data["age"]  # age extraction

        bothGenders = data["gender"]  # get both gender confidence rates
        woman = bothGenders["Woman"]  # woman confidence
        man = bothGenders["Man"]  # man confidence

        # based on the probabilities, find which is larger and return that
        gender = None
        if woman > man:
            gender = "Female"
        elif woman < man:
            gender = "Male"

        # the order is very important (must be gender THEN age)
        returnData = [gender, age]  # create a list and send it

        return returnData
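The header-stripping and decoding step at the top of recognize can be exercised on its own; a minimal sketch (the helper name is ours):

```python
import base64

def decode_data_uri(b64_string: str) -> bytes:
    """Strip an optional data-URI header and decode the base64 payload."""
    if b64_string.startswith("data:image"):  # e.g. "data:image/png;base64,...."
        b64_string = b64_string.split(",", 1)[1]
    return base64.b64decode(b64_string)
```

This accepts both the raw base64 string a backend might receive and the `data:image/...;base64,` form produced by a browser-side FileReader, which is what the front end sends here.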