Closed rohansingh9001 closed 3 years ago
I don't have a clear understanding of what this issue is fixing or what should be present in the prepared script. Can you explain it in layman's terms?
In very simplified terms: in machine learning, the data you have, be it training data, validation data, or weights and biases, is represented by tensors. You might have seen multi-dimensional arrays used quite often in machine learning; they are actually tensors. In fact, matrices, vectors, and scalars are all essentially tensors: scalars are rank-0 tensors, vectors rank-1, matrices rank-2, and a multi-dimensional array of N dimensions is a rank-N tensor.
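The rank terminology above maps directly onto NumPy's `ndim` attribute; a small illustration (not part of the issue itself):

```python
import numpy as np

scalar = np.array(3.0)            # rank 0 tensor
vector = np.array([1.0, 2.0])     # rank 1 tensor
matrix = np.array([[1.0, 0.0],
                   [0.0, 1.0]])   # rank 2 tensor

print(scalar.ndim, vector.ndim, matrix.ndim)  # → 0 1 2
```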
Currently, we are using raw NumPy arrays in our library. However, we might want to add more features than plain NumPy arrays provide.
Hence, the requirement of this issue is to create a new Tensor class that inherits from NumPy's ndarray class, so that our Tensor class has all the functionality of NumPy arrays while letting us add more methods to it.
You do not need to add any methods to solve this issue; pretty much a blank class with docstrings will do. For the docstring format, please refer to the code already in the repository.
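As a hedged sketch of what such a subclass might look like (the class name matches the issue, but the docstring content and structure are illustrative, not the repository's actual conventions): subclassing NumPy's array type is usually done through `__new__` and `__array_finalize__` rather than `__init__`, because ndarrays can also be created by slicing and by ufuncs.

```python
import numpy as np


class Tensor(np.ndarray):
    """A placeholder Tensor type that behaves exactly like a NumPy array.

    Library-specific methods can be added here later.
    """

    def __new__(cls, input_array):
        # View the input data as an instance of our subclass so that
        # all ndarray functionality is inherited unchanged.
        return np.asarray(input_array).view(cls)

    def __array_finalize__(self, obj):
        # Called on construction, slicing, and ufunc results;
        # there is no extra state to initialise yet.
        pass
```

With this, an expression like `Tensor([1, 2]) + Tensor([3, 4])` still returns a `Tensor`, since ufunc results preserve the subclass.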
We could implement this by creating a core folder, creating a wrapper in it, and then importing this core wrapper wherever it is required.
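One possible layout for that (folder and file names here are only a suggestion, not a decided structure):

```
core/
├── __init__.py   # re-exports Tensor so callers can write `from core import Tensor`
└── tensor.py     # defines the Tensor wrapper around numpy.ndarray
```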
> Hence, the requirement of this issue is to create a new Tensor class that inherits from NumPy's ndarray class, so that our Tensor class has all the functionality of NumPy arrays while letting us add more methods to it.
I understand we could create a class, but isn't converting a NumPy array to a tensor as simple as:
```python
import numpy as np
import torch

numpy_arr = np.array(args)
tensor = torch.from_numpy(numpy_arr)
```
```python
import numpy as np


class Tensor:
    def __init__(self, data, requires_grad=False):
        if not isinstance(data, np.ndarray):
            data = np.array(data)
        self.data = data
        # whether to run backpropagation or not
        self.requires_grad = requires_grad
        # tensor gradient
        self._grad = None
        # operation this tensor was produced by, if any
        self._grad_fn = None

    @property
    def shape(self):
        return self.data.shape

    @property
    def grad_fn(self):
        if not self.requires_grad:
            raise Exception('This tensor is not backpropagated')
        return self._grad_fn

    @property
    def grad(self):
        return self._grad

    def backward(self, grad=None):
        if not self.requires_grad:
            raise Exception('This tensor is not backpropagated')
        if self._grad_fn is None:
            return False
        if grad is None and self._grad is None:
            # in case this is the final loss tensor
            grad = self.__class__(1., requires_grad=False)
        elif self._grad is not None:
            grad = self._grad
        self._grad_fn.backward(grad)
        return True

    def __str__(self):
        return f'Tensor({str(self.data)})'

    def add_grad(self, grad):
        if self._grad is None:
            self._grad = grad
        else:
            self._grad += grad

    def __add__(self, o):
        # return a new tensor instead of mutating self in place
        return self.__class__(self.data + o.data,
                              requires_grad=self.requires_grad or o.requires_grad)
```
@rohansingh9001 Here is what I tried for wrapping NumPy arrays in a Tensor class.
Currently, we are using raw NumPy arrays throughout the directory.
However, we might need more functionality from these arrays, specific to our class, when we implement features like autograd.
For now, we want to wrap all NumPy arrays in a simple Tensor class that inherits from NumPy's ndarray class.
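To make the autograd remark concrete, here is a minimal, hypothetical sketch (not this library's design, and every name in it is illustrative) of how a wrapper class could record enough information during `__add__` to backpropagate later:

```python
import numpy as np


class Tensor:
    """Toy autograd tensor: each op records its parents and a backward closure."""

    def __init__(self, data, parents=(), backward_fn=None):
        self.data = np.asarray(data, dtype=float)
        self.grad = np.zeros_like(self.data)
        self._parents = parents
        self._backward_fn = backward_fn

    def __add__(self, other):
        out = Tensor(self.data + other.data, parents=(self, other))

        def _backward():
            # d(out)/d(self) = d(out)/d(other) = 1, so the upstream
            # gradient flows through unchanged to both parents.
            self.grad += out.grad
            other.grad += out.grad

        out._backward_fn = _backward
        return out

    def backward(self):
        # Seed with ones, then propagate in reverse topological order.
        self.grad = np.ones_like(self.data)
        topo, seen = [], set()

        def build(t):
            if id(t) not in seen:
                seen.add(id(t))
                for p in t._parents:
                    build(p)
                topo.append(t)

        build(self)
        for t in reversed(topo):
            if t._backward_fn is not None:
                t._backward_fn()
```

For example, after `c = a + b` and `c.backward()`, both `a.grad` and `b.grad` are all ones, matching the derivative of addition.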