vlab-kaist / NN101_23S

MIT License
6 stars 7 forks

[LAB] Week 2_Problem 1_박하원 #69

Closed peppermint-herb closed 1 year ago

peppermint-herb commented 1 year ago

Problem

Week 2_Problem 1

Source Code

import torch

##                         Problem 1                          ##
##                                                            ##
##           Arbitrary x_train, y_train are given.            ##
##   Suppose that x and y have linear correlation, y=wx+b.    ##
##     In function training(), you should return [w, b].      ##
##          In function predict(), you should return          ##
##            list y_test corresponding to x_test.            ##
##                  Made by @jangyoujin0917                   ##
##                                                            ##

# NOTE : Feel free to use torch.optim and tensor.

def training(x_train : list, y_train : list) -> list: # DO NOT MODIFY FUNCTION NAME
    # Data normalization code (prevents overflow when calculating MSE, prevents underfitting)
    # Note that you need to convert [w, b] to the original scale before returning value
    # w = w * (y_max - y_min)
    # b = b * (y_max - y_min) + y_min
    y_min = min(y_train)
    y_max = max(y_train)
    if y_max == y_min: # constant targets: best fit is y = y_min; avoid division by zero
        return [0.0, y_min]
    normalize = lambda y : (y - y_min)/(y_max - y_min)

    normalized_y_train = [normalize(y) for y in y_train]

    w = torch.tensor(1., requires_grad=True)
    b = torch.tensor(0., requires_grad=True)

    alpha = 0.01
    epoch = 10000

    # Only w and b need gradients; the data tensors are constants
    x_train_tensor = torch.tensor(x_train)
    y_train_tensor = torch.tensor(normalized_y_train)
    optimizer = torch.optim.Adam([w, b], lr=alpha)

    for _ in range(epoch):
        optimizer.zero_grad()

        hypothesis = w * x_train_tensor + b # linear model on the normalized y scale
        error = torch.mean((hypothesis - y_train_tensor) ** 2) # MSE loss
        error.backward()
        optimizer.step()

    # Rescale [w, b] from the normalized y range back to the original scale
    return [w.item() * (y_max - y_min), b.item() * (y_max - y_min) + y_min]

def predict(x_train : list, y_train : list, x_test : list) -> list: # DO NOT MODIFY FUNCTION NAME
    w, b = training(x_train, y_train)

    # w and b are plain floats, so apply the fitted line elementwise
    return [w * x + b for x in x_test]

if __name__ == "__main__":
    x_train = [0.0, 1.0, 2.0, 3.0, 4.0]
    y_train = [2.0, 4.0, 6.0, 8.0, 10.0] # Note that not all test cases give clear line.
    x_test = [5.0, 10.0, 8.0]

    w, b = training(x_train, y_train)
    y_test = predict(x_train, y_train, x_test)

    print(w, b)
    print(y_test)
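For the sample data above, the exact line is y = 2x + 2, so a correct run should print approximately w ≈ 2.0, b ≈ 2.0 and y_test ≈ [12.0, 22.0, 18.0], up to optimizer tolerance.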

Description

.

Output (Optional)

No response

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Your code failed to run. Please check again.

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 30.9}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 30.9}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 30.9}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 30.9}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 0.0}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 30.9}

jangyoujin0917 commented 1 year ago

Since we don't know the range of x_train and y_train, the derivative of the loss can be enormous when updating w and b, so in this code the weights and biases diverge on test cases 2 and 3. I think you can fix this by adjusting the learning rate.
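As a concrete illustration of this suggestion (a minimal sketch, not the required fix): normalizing x to [0, 1] as well as y keeps the MSE gradients bounded regardless of the raw input range, so a fixed learning rate stays stable across test cases. The helper name training_normalized_x and the rescaling algebra below are illustrative, not part of the assignment template; it assumes x and y are not constant.

import torch

def training_normalized_x(x_train: list, y_train: list) -> list:
    # Normalize both x and y to [0, 1] (assumes x_max != x_min, y_max != y_min)
    x_min, x_max = min(x_train), max(x_train)
    y_min, y_max = min(y_train), max(y_train)
    xs = torch.tensor([(x - x_min) / (x_max - x_min) for x in x_train])
    ys = torch.tensor([(y - y_min) / (y_max - y_min) for y in y_train])

    w = torch.tensor(1., requires_grad=True)
    b = torch.tensor(0., requires_grad=True)
    optimizer = torch.optim.Adam([w, b], lr=0.01)

    for _ in range(10000):
        optimizer.zero_grad()
        loss = torch.mean((w * xs + b - ys) ** 2) # MSE on normalized data
        loss.backward()
        optimizer.step()

    # Undo both normalizations:
    # y = y_min + (y_max - y_min) * (w * (x - x_min) / (x_max - x_min) + b)
    w_orig = w.item() * (y_max - y_min) / (x_max - x_min)
    b_orig = y_min + (y_max - y_min) * (b.item() - w.item() * x_min / (x_max - x_min))
    return [w_orig, b_orig]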

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 55.5}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 62.8}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 62.8}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 30.9}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': nan}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': nan}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': nan}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': nan}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 67.8}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 81.0}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 80.9}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 33.5}

jangyoujin0917 commented 1 year ago

This is an auto-generated grading output. Checking code of peppermint-herb {'peppermint-herb': 81.0}

Great!