vlab-kaist / NN101_23S


[LAB] Week 2_Problem 1_한정진 #86

Closed. Jeong-jin-Han closed this issue 1 year ago.

Jeong-jin-Han commented 1 year ago

Problem

Week 2_Problem 1

Source Code

import torch

##                         Problem 1                          ##
##                                                            ##
##           Arbitrary x_train, y_train are given.            ##
##   Suppose that x and y have linear correlation, y=wx+b.    ##
##     In function training(), you should return [w, b].      ##
##          In function predict(), you should return          ##
##            list y_test corresponding to x_test.            ##
##                  Made by @jangyoujin0917                   ##
##                                                            ##

# NOTE : Feel free to use torch.optim and tensor.

def training(x_train : list[float], y_train : list[float]) -> list[float]: # DO NOT MODIFY FUNCTION NAME
    # Data normalization (prevents overflow when computing the MSE and prevents underfitting).
    # Note that [w, b] must be converted back to the original scale before returning:
    # w = w * (y_max - y_min)
    # b = b * (y_max - y_min) + y_min
    y_min = min(y_train)
    y_max = max(y_train)
    normalize = lambda y : (y - y_min)/(y_max - y_min) # min-max normalization
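    # Why the rescaling works (a sketch): if the model fits the normalized
    # targets y' = (y - y_min)/(y_max - y_min) with y' = w'x + b', then
    #     y = w'*(y_max - y_min)*x + b'*(y_max - y_min) + y_min,
    # so w = w'*(y_max - y_min) and b = b'*(y_max - y_min) + y_min.
    # (Assumes y_max != y_min; a constant y_train would divide by zero here.)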

    ### IMPLEMENT FROM HERE

    # Normalize the targets into a new list (avoids mutating the caller's y_train)
    y_train = [normalize(y) for y in y_train]
    # print(y_train)

    # alpha = 0.05

    x_train = torch.tensor(x_train)
    y_train = torch.tensor(y_train)
    # Initialize the model parameters
    W = torch.zeros(1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    # Set up the optimizer
    # lr: learning rate (step size)
    # nb_epochs: number of gradient-descent iterations
    optimizer = torch.optim.SGD([W, b], lr=0.0000005)
    nb_epochs = 1999 # repeat gradient descent as many times as desired

    for epoch in range(nb_epochs + 1):

      # Compute the hypothesis H(x) = Wx + b
      hypothesis = x_train * W + b

      # Compute the cost: MSE = mean((H(x) - y)^2)
      cost = torch.mean((hypothesis - y_train) ** 2)

      # Improve H(x) from the cost: reset gradients, backprop, take a gradient step
      optimizer.zero_grad()
      cost.backward()
      optimizer.step()
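      # (Sketch, for reference: with plain SGD and no momentum, the step above
      # is equivalent to the manual updates
      #     with torch.no_grad():
      #         W -= 0.0000005 * W.grad
      #         b -= 0.0000005 * b.grad
      # using the gradients computed by cost.backward().)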

      # Print a log every 100 epochs (W and b are still on the normalized scale here)
      # if epoch % 100 == 0:
      #     print("Epoch {:4d}/{} W: {:.3f}, b: {:.3f} Cost: {:.6f}".format(
      #         epoch, nb_epochs, W.item(), b.item(), cost.item()
      #     ))
    W = W * (y_max - y_min) # rescale back to the original scale after normalization
    b = b * (y_max - y_min) + y_min

    ### Closed-form least squares, usable when normalization is skipped:
    # num = len(y_train)
    # x_bar = sum(x_train)/num
    # y_bar = sum(y_train)/num
    # b1 = 0.0
    # b2 = 0.0
    # for i in range(num):
    #     b1 += y_train[i]*x_train[i]
    #     b2 += x_train[i]**2
    # w = (b1 - num*x_bar*y_bar) / (b2 - num*x_bar**2)  # slope
    # b = y_bar - w*x_bar                               # intercept
    # print(w, b)

    return [W.item(), b.item()]

def predict(x_train : list[float], y_train : list[float], x_test : list[float]) -> list[float]: # DO NOT MODIFY FUNCTION NAME
    ### IMPLEMENT FROM HERE
    x_test = torch.tensor(x_test)
    w, b = training(x_train, y_train)
    y_test = x_test * w + b
    return y_test.tolist()
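# Note (reviewer sketch): predict() retrains from scratch on every call; if it
# were called repeatedly, caching [w, b] from a single training() run would
# avoid the redundant work.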

#if __name__ == "__main__":
#    x_train = [0.0, 1.0, 2.0, 3.0, 4.0]
#    y_train = [2.0, 4.0, 6.0, 8.0, 10.0] # Note that not all test cases give a clean line.
#    x_test = [5.0, 10.0, 8.0]
#    
#    w, b = training(x_train, y_train)
#    y_test = predict(x_train, y_train, x_test)
#
#    print(w, b)
#    print(y_test)


Description

Problem 1

Arbitrary x_train, y_train are given.

Suppose that x and y have linear correlation, y = wx + b.

In function training(), you should return [w, b].

In function predict(), you should return list y_test corresponding to x_test.

Made by @jangyoujin0917
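A minimal usage sketch, reusing the sample data from the commented-out block in the source code (whether w and b actually approach 2 depends on the learning rate and epoch count chosen above):

    x_train = [0.0, 1.0, 2.0, 3.0, 4.0]
    y_train = [2.0, 4.0, 6.0, 8.0, 10.0]                   # exactly y = 2x + 2
    w, b = training(x_train, y_train)                      # ideally w ≈ 2, b ≈ 2
    y_test = predict(x_train, y_train, [5.0, 10.0, 8.0])   # ideally ≈ [12.0, 22.0, 18.0]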

Output (Optional)

No response

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Your code failed to run. Please check again.

jangyoujin0917 commented 1 year ago

Your code must not contain print statements or single quotes (') in order to receive a score from the auto-grader.

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Your code failed to run. Please check again.

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Your code failed to run. Please check again.

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Your code failed to run. Please check again.

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Your code failed to run. Please check again.

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Your code failed to run. Please check again.

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of Jeong-jin-Han {'Jeong-jin-Han': nan}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of Jeong-jin-Han {'Jeong-jin-Han': nan}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of Jeong-jin-Han {'Jeong-jin-Han': nan}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of Jeong-jin-Han {'Jeong-jin-Han': 9.5}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of Jeong-jin-Han {'Jeong-jin-Han': 9.5}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of Jeong-jin-Han {'Jeong-jin-Han': nan}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of Jeong-jin-Han {'Jeong-jin-Han': nan}

Dongyeongkim commented 1 year ago

This issue is now closed due to lack of progress.