vlab-kaist / NN101_23S

MIT License

[LAB] Week 3_Problem 1_이현석 #98

Closed RadioActiveBlackTi closed 1 year ago

RadioActiveBlackTi commented 1 year ago

Problem

Week 3_Problem 1

Source Code

import torch
import torch.nn.functional as F
from random import random
from typing import Callable

##                         Problem 1                          ##
##                                                            ##
##           Arbitrary x_train, y_train are given.            ##
##          In function predict(), you should return          ##
##            list y_test corresponding to x_test.            ##
##               y_train only contains 0 and 1.               ##
##            Therefore, use logistic regression.             ##
##                  Made by @jangyoujin0917                   ##
##                                                            ##

# NOTE : 1. Feel free to use torch.optim and tensor.
#        2. In this problem, we will only grade 'predict' function.
#           Function 'training' is only for modularization.

def training(x_train: list[list[float]], y_train: list[float]):  # DO NOT MODIFY FUNCTION NAME
    n = len(x_train[0])  # number of input features

    x = torch.FloatTensor(x_train)
    y = torch.FloatTensor(y_train)

    # Learnable weight vector and bias, both initialized to 1.
    w = torch.ones(n, requires_grad=True)
    b = torch.ones(1, requires_grad=True)

    epochs = 10000
    optimizer = torch.optim.Adam([w, b], lr=1e-2)

    for i in range(epochs):
        # Sigmoid output. Note that b enters exp() with a plus sign, so the model
        # is effectively sigmoid(w . x - b); since b is learned, this only flips
        # the sign of the fitted bias.
        lf = 1 / (1 + torch.exp(-w @ torch.transpose(x, 0, 1) + b))
        # Surrogate loss: for labels in {0, 1}, each term equals 1 + |y - lf|,
        # so minimizing it pushes the prediction toward the label.
        cost = (lf ** (1 - y) + (1 - lf) ** y).sum()

        optimizer.zero_grad()
        cost.backward()
        optimizer.step()
    # print(w, b)
    return w, b

def predict(x_train: list[list[float]], y_train: list[float],
            x_test: list[list[float]]) -> list[float]:  # DO NOT MODIFY FUNCTION NAME
    w, b = training(x_train, y_train)
    x = torch.FloatTensor(x_test)
    # Same model as in training(): sigmoid(w . x - b) for each test point.
    y_now = 1 / (1 + torch.exp(-w @ torch.transpose(x, 0, 1) + b))

    # Detach from the autograd graph and convert to plain Python floats.
    return y_now.detach().tolist()

if __name__ == "__main__":
    # This is a very simple case. Passing this testcase does not mean that the code is perfect.
    # Please consider the practical problems when the score is not high.
    x_train = [[0., 1.], [1., 0.], [2., 5.], [3., 1.], [4., 2.]]
    y_train = [0., 0., 1., 0., 1.]
    x_test = [[7., 2.], [1.5, 1.], [2.5, 0.5]]

    y_test = predict(x_train, y_train, x_test)

    print(y_test)
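
For reference, here is a minimal sketch of the same model trained with the standard logistic-regression loss, i.e. binary cross-entropy computed via torch.nn.functional.binary_cross_entropy_with_logits. The function names train_logistic and predict_proba and the hyperparameter values are illustrative assumptions, not part of the graded template.

import torch

def train_logistic(x_train: list[list[float]], y_train: list[float],
                   epochs: int = 5000, lr: float = 1e-2):
    # Standard logistic regression: minimize binary cross-entropy on the logits.
    x = torch.tensor(x_train, dtype=torch.float32)
    y = torch.tensor(y_train, dtype=torch.float32)
    w = torch.zeros(x.shape[1], requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    optimizer = torch.optim.Adam([w, b], lr=lr)
    for _ in range(epochs):
        logits = x @ w + b  # shape: (num_samples,)
        loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return w, b

def predict_proba(w, b, x_test: list[list[float]]) -> list[float]:
    # Predicted probabilities of class 1 for each test point.
    x = torch.tensor(x_test, dtype=torch.float32)
    with torch.no_grad():
        return torch.sigmoid(x @ w + b).tolist()

binary_cross_entropy_with_logits applies the sigmoid internally, which tends to be more numerically stable than computing 1 / (1 + torch.exp(...)) by hand.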

Description


Output (Optional)

No response

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of RadioActiveBlackTi {'RadioActiveBlackTi': 30.0}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of RadioActiveBlackTi Timeout reached for RadioActiveBlackTi {'RadioActiveBlackTi': 'timed_out'}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of RadioActiveBlackTi {'RadioActiveBlackTi': 44.0}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of RadioActiveBlackTi {'RadioActiveBlackTi': 44.0}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Your code failed to run. Please check again.

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of RadioActiveBlackTi {'RadioActiveBlackTi': 0.0}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Your code failed to run. Please check again.

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of RadioActiveBlackTi {'RadioActiveBlackTi': 80.0}

jangyoujin0917 commented 1 year ago

Good job!

RadioActiveBlackTi commented 1 year ago

I think finding appropriate hyperparameters is too hard...
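
Standardizing the input features before training often makes the result much less sensitive to the learning rate and epoch count. A minimal sketch, assuming a helper standardize that is not part of the course template:

import torch

def standardize(x_train: list[list[float]], x_test: list[list[float]]):
    # Scale each feature to zero mean and unit variance using training statistics,
    # then apply the same transform to the test set.
    train = torch.tensor(x_train, dtype=torch.float32)
    test = torch.tensor(x_test, dtype=torch.float32)
    mean = train.mean(dim=0)
    std = train.std(dim=0).clamp_min(1e-8)  # avoid division by zero for constant features
    return (train - mean) / std, (test - mean) / std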