vlab-kaist / NN101_23S

MIT License

[LAB] Week 3_Problem 1_이성준 #112

Closed E-pak-sa closed 1 year ago

E-pak-sa commented 1 year ago

Problem

Week 3_Problem 1

Source Code

import numpy as np
import math
import torch
import torch.nn.functional as F
import torch.optim as optim
from random import random 
from typing import Callable

##                         Problem 1                          ##
##                                                            ##
##           Arbitrary x_train, y_train are given.            ##
##          In function predict(), you should return          ##
##            list y_test corresponding to x_test.            ##
##               y_train only contains 0 and 1.               ##
##            Therefore, use logistic regression.             ##
##                  Made by @jangyoujin0917                   ##
##                                                            ##

# NOTE : 1. Feel free to use torch.optim and tensor.
#        2. In this problem, we will only grade "predict" function.
#           Function "training" is only for modularization.

def training(x_train : list[list[float]], y_train : list[float]) -> tuple[list[float], float]: # DO NOT MODIFY FUNCTION NAME
    x_train = torch.FloatTensor(x_train)  # shape (n_samples, n_features)
    y_train = torch.FloatTensor(y_train)  # shape (n_samples,)
    W = torch.zeros(x_train.shape[1], requires_grad=True)  # weight vector
    b = torch.zeros(1, requires_grad=True)                 # bias

    optimizer = optim.Adam([W, b], lr=0.007)

    for epoch in range(20001):
        optimizer.zero_grad()
        # Sigmoid of the linear score x @ W + b gives P(y = 1 | x).
        hypothesis = torch.sigmoid(x_train @ W + b)
        cost = F.binary_cross_entropy(hypothesis, y_train)
        cost.backward()
        optimizer.step()

    return (W, b)

def predict(x_train : list[list[float]], y_train : list[float], x_test : list[list[float]]) -> list[float]: # DO NOT MODIFY FUNCTION NAME
    x_test = torch.FloatTensor(x_test)
    W, b = training(x_train, y_train)
    # Apply the trained model to the test inputs and return plain Python floats.
    y_test = torch.sigmoid(x_test @ W + b)
    return y_test.tolist()
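
For reference, a minimal usage sketch with hypothetical toy data (not part of the graded submission) might look like:

if __name__ == "__main__":
    # Hypothetical 1-D toy data: two clusters labelled 0 and 1.
    x_train = [[0.1], [0.3], [0.8], [0.9]]
    y_train = [0.0, 0.0, 1.0, 1.0]
    x_test = [[0.2], [0.85]]
    print(predict(x_train, y_train, x_test))  # expected: probabilities near 0 and near 1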

Description

Logistic regression using PyTorch (normalized).
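
The description mentions normalization, but the submitted code does not standardize its inputs. A minimal sketch of per-feature standardization that could be applied before training (the standardize helper is hypothetical; if used, the mean and standard deviation computed on x_train must also be applied to x_test):

def standardize(x: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    # Per-feature standardization to zero mean and unit variance.
    return (x - mean) / (std + 1e-8)  # small epsilon avoids division by zero

# Example: statistics are computed on the training set only.
# mean, std = x_train.mean(dim=0, keepdim=True), x_train.std(dim=0, keepdim=True)
# x_train = standardize(x_train, mean, std)
# x_test  = standardize(x_test, mean, std)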

Output (Optional)

No response

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Your code failed to run. Please check again.

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of E-pak-sa {'E-pak-sa': 28.0}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of E-pak-sa Timeout reached for E-pak-sa {'E-pak-sa': 'timed_out'}
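
The timeouts above most likely come from the fixed 20001-iteration loop in training(). One possible mitigation, shown only as a sketch with a hypothetical tolerance (not what the submission does), is to replace that loop with a convergence-based stop:

    prev_cost = float("inf")
    for epoch in range(20001):
        optimizer.zero_grad()
        hypothesis = torch.sigmoid(x_train @ W + b)
        cost = F.binary_cross_entropy(hypothesis, y_train)
        cost.backward()
        optimizer.step()
        # Hypothetical convergence check: stop once the improvement is negligible.
        if abs(prev_cost - cost.item()) < 1e-7:
            break
        prev_cost = cost.item()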

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of E-pak-sa {'E-pak-sa': 22.0}

github-actions[bot] commented 1 year ago

This is an auto-generated grading output. Checking code of E-pak-sa {'E-pak-sa': 28.0}

Dongyeongkim commented 1 year ago

This issue is now closed due to lack of progress.