Edinburgh-Chemistry-Teaching / ATCP_23_24


Typos in 01 intro to pytorch #2

Open marmatti opened 1 year ago

marmatti commented 1 year ago

image

Deep neural network built on an(?) autograd system
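To illustrate the sentence being corrected, here is a minimal sketch of what "built on an autograd system" means in practice (the values are my own, not from the notebook): PyTorch records operations on tensors with `requires_grad=True` and differentiates through them automatically.

```python
import torch

# Autograd sketch: y = x^2, so dy/dx = 2x.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
y.backward()          # autograd computes dy/dx
grad = x.grad.item()  # 2 * 3.0 = 6.0
```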


image

The following cell installs(?) necessary packages...


image

...and it(?) is just a generic n-dimensional array to be used...
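For the "generic n-dimensional array" sentence, a short sketch (example values are mine) of a tensor behaving like an n-dimensional array:

```python
import torch

# A tensor is a generic n-dimensional array.
t = torch.tensor([[1, 2, 3], [4, 5, 6]])
shape = tuple(t.shape)  # (2, 3)
ndim = t.dim()          # 2 dimensions
```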


image

We can also specify a(?) data type
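A sketch of the "specify a data type" point (illustrative values, not the notebook's):

```python
import torch

# The dtype can be given explicitly at creation time.
a = torch.tensor([1, 2, 3], dtype=torch.float32)
b = torch.zeros(2, 2, dtype=torch.int64)
```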


image

Casting tensor x as a(?) CUDA datatype if CUDA is(?) available


image

Attention: x.to(device) will not cast x as a(?) cuda datatype - we need x = x.to(device)


image

I do not think this flows super clearly. I would rewrite it as: "Note: As mentioned above, whenever we work with tensors, x.to(device) does not move x to cuda; we need to write x = x.to(device) instead."
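The point being corrected can be demonstrated directly: `.to(device)` returns a new tensor rather than modifying `x` in place, so the result must be assigned back (this sketch runs on CPU when CUDA is unavailable):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.ones(3)
x.to(device)      # result discarded - x itself is unchanged
x = x.to(device)  # correct: rebind x to the moved tensor
on_device = (x.device.type == device.type)
```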


image

DoubleTensor is a(?) 64-bit floating-point tensor and FloatTensor is a(?) 32-bit floating-point tensor, so a FloatTensor uses half the memory of a DoubleTensor of the same size(?) ... So PyTorch leaves it up(?) to the(?) user to choose...
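The memory claim can be checked with `element_size()`, which reports bytes per element (a small sketch, values are mine):

```python
import torch

# 64-bit double vs 32-bit float: 8 bytes vs 4 bytes per element.
d = torch.zeros(10, dtype=torch.float64)
f = torch.zeros(10, dtype=torch.float32)
double_bytes = d.element_size()  # 8
float_bytes = f.element_size()   # 4
```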


image

Set a default tensor type for your notebook
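A sketch of setting a default tensor type for a notebook session (the dtype chosen here is arbitrary, and the usual default is restored afterwards):

```python
import torch

torch.set_default_dtype(torch.float64)
x = torch.tensor([1.0, 2.0])            # now created as float64
torch.set_default_dtype(torch.float32)  # restore the usual default
```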


image

The MNIST hand-written digits dataset consists of 60,000 training...


image

...Since we have been using scikit-learn, we will use the(?) dataset in the example here: "dataset" and "data set" are both valid spellings; this is just a suggestion to make the usage uniform.


image

Splitting the data into train and test sets
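Since the notebook uses scikit-learn, a hedged sketch of a train/test split (the actual variable names and split ratio in the notebook may differ):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # 10 samples, 2 features (fake data)
y = np.arange(10)                 # fake labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # 8 train / 2 test samples
```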


image

... This is just an example: the full stop "." is missing at the end.


image

If np.argmax(out,axis=1)-y is non-zero (I would move "i.e. every time there is a mismatch between prediction and label" here), e.g.(?) the label y is 9 and the prediction is 5, (comma?) then diff is incremented by 1.
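The mismatch count being described can be sketched like this (scores and labels below are made up, not from the notebook):

```python
import numpy as np

out = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])  # fake scores
y = np.array([1, 1, 1])                               # fake labels
pred = np.argmax(out, axis=1)      # predicted classes: [1, 0, 1]
diff = np.count_nonzero(pred - y)  # non-zero entries = mismatches
```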


image

we can't call numpy() on Tensors that require grad.
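The restriction can be shown directly, along with the usual fix of detaching from the computation graph first (a small sketch with made-up values):

```python
import torch

x = torch.ones(3, requires_grad=True)
try:
    x.numpy()             # raises RuntimeError on a grad-tracking tensor
    raised = False
except RuntimeError:
    raised = True
arr = x.detach().numpy()  # detach from the graph, then convert
```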


image

The optimiser module gives access to a large number of standard optimisers that try to help minimise the loss.
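A minimal sketch of the optimiser module in action (SGD on a single parameter with a toy quadratic loss; everything here is illustrative, not the notebook's model):

```python
import torch

w = torch.tensor(2.0, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

loss = (w - 1.0) ** 2  # minimum at w = 1
loss.backward()        # grad = 2 * (w - 1) = 2.0
opt.step()             # w <- w - 0.1 * 2.0 = 1.8
opt.zero_grad()
w_val = round(w.item(), 4)
```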


image

Typically for a classification problem one would use a cross entropy loss : (remove the space before the colon?) the torch documentation has some more details on this.
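A hedged sketch of cross-entropy loss for classification: it takes raw logits of shape (N, C) and integer class labels of shape (N,); the numbers below are made up.

```python
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 1.5, 0.3]])  # 2 samples, 3 classes
labels = torch.tensor([0, 1])             # true class indices
loss = nn.CrossEntropyLoss()(logits, labels)
positive = loss.item() > 0
```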


image

This is a parameter you set in your optimiser and you can play around with different values for it.
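To illustrate that the learning rate is just an optimiser argument one can vary (values here are arbitrary):

```python
import torch

w = torch.tensor(0.0, requires_grad=True)
small = torch.optim.SGD([w], lr=0.01)  # cautious steps
large = torch.optim.SGD([w], lr=0.5)   # aggressive steps
lr_small = small.param_groups[0]["lr"]
lr_large = large.param_groups[0]["lr"]
```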


ppxasjsm commented 1 year ago

@ryankzhu can you fix these, please? Thanks!