mlverse / torch

R Interface to Torch
https://torch.mlverse.org

A simple example of training a neural network for a noob #1092

Closed: mytarmail closed this issue 1 year ago

mytarmail commented 1 year ago

Hello! I am trying, so far unsuccessfully, to learn torch. I have looked at many code examples, but they are all overloaded with data transformations and other extras, which is frustrating for a complete newbie. Strangely, I never found a simple, clear example of binary classification. If anyone has the time and inclination, please write an example of training an ordinary neural network for me.

Here is my initial data

d <- rbind(matrix(rnorm(2500, mean = 1),   ncol = 5, nrow = 500),
           matrix(rnorm(2500, mean = 0.2), ncol = 5, nrow = 500))
d <- cbind.data.frame(target = as.factor(c(rep(1, 500), rep(-1, 500))), x = d)

library(torch)
library(luz)
library(scorecard)

x <- d |> split_df(ratios = c(0.7, 0.3), name_dfs = c('train', 'test'))

my_dataset <- dataset(
  name = "my_dataset",
  initialize = function(x) {
    # note: the factor target is stored as its underlying integer codes
    self$y <- x$target |> torch_tensor()
    self$x <- x[, -1] |> as.matrix() |> torch_tensor()
  },
  .getitem = function(i) list(x = self$x[i, ], y = self$y[i]),
  .length = function() self$y$size()[[1]]
)

train_dl <- x$train |> my_dataset() |> dataloader(batch_size = 5, shuffle = TRUE)
test_dl  <- x$test  |> my_dataset() |> dataloader(batch_size = 5, shuffle = FALSE)

-----------
# neural network initialization for binary classification
# neural network training  and print  train, test errors
-----------

I don't care about the number of layers, and I don't need preprocessing or the like; what matters to me is understanding the training code itself. Thanks.

dfalbel commented 1 year ago

Hi @mytarmail

I suggest the first chapter of 'Deep Learning and Scientific Computing with R torch'.

Here you can find an example of a full training loop using lower-level torch: https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/network_2.html
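In that spirit, here is a minimal low-level sketch for data shaped like yours. It is self-contained rather than reusing your exact split, and the layer sizes, learning rate, and epoch count are arbitrary choices; note the target is recoded to 0/1 floats, which `nn_bce_with_logits_loss()` expects.

```r
library(torch)

set.seed(1); torch_manual_seed(1)

# toy data in the same shape as the question: 5 features, two Gaussian classes
x <- torch_tensor(rbind(matrix(rnorm(500, mean = 1),   ncol = 5),
                        matrix(rnorm(500, mean = 0.2), ncol = 5)))
y <- torch_tensor(c(rep(1, 100), rep(0, 100)))  # 0/1 float targets

# a small two-layer network; sizes are arbitrary
net <- nn_module(
  initialize = function() {
    self$fc1 <- nn_linear(5, 16)
    self$fc2 <- nn_linear(16, 1)
  },
  forward = function(input) {
    input |> self$fc1() |> nnf_relu() |> self$fc2()
  }
)

model   <- net()
opt     <- optim_adam(model$parameters, lr = 0.01)
loss_fn <- nn_bce_with_logits_loss()

# full-batch gradient descent: forward, loss, backward, step
for (epoch in 1:50) {
  opt$zero_grad()
  logits <- model(x)$squeeze(2)  # drop the trailing size-1 dimension
  loss   <- loss_fn(logits, y)
  loss$backward()
  opt$step()
}

# training accuracy: a positive logit means class 1
acc <- mean(as.array(logits > 0) == as.array(y == 1))
acc
```

With mini-batches, the inner body would instead run inside `coro::loop(for (b in train_dl) { ... })` over a dataloader, which is exactly what the chapter linked above walks through.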

The next chapter covers luz, which abstracts away the training loop; that can also help if you find the code in chapter 1 too overloaded.
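For comparison, a luz version of the same task can be sketched as follows. Again this is a self-contained sketch, not your exact pipeline: it builds its own dataloader with `tensor_dataset()`, recodes the -1/1 factor as 0/1 floats, and squeezes the output inside `forward()` so predictions and targets have matching shapes.

```r
library(torch)
library(luz)

set.seed(1); torch_manual_seed(1)

# toy data shaped like the question's: 5 features, two Gaussian classes
d <- rbind(matrix(rnorm(2500, mean = 1),   ncol = 5),
           matrix(rnorm(2500, mean = 0.2), ncol = 5))
y <- c(rep(1, 500), rep(0, 500))  # recode the -1/1 factor as 0/1 floats

ds <- tensor_dataset(x = torch_tensor(d), y = torch_tensor(y))
dl <- dataloader(ds, batch_size = 25, shuffle = TRUE)

net <- nn_module(
  initialize = function() {
    self$fc1 <- nn_linear(5, 16)
    self$fc2 <- nn_linear(16, 1)
  },
  forward = function(x) {
    x |> self$fc1() |> nnf_relu() |> self$fc2() |> torch_squeeze(2)
  }
)

# luz replaces the manual loop: setup() attaches loss/optimizer/metrics,
# fit() runs the epochs over the dataloader
fitted <- net |>
  setup(loss = nn_bce_with_logits_loss(),
        optimizer = optim_adam,
        metrics = luz_metric_binary_accuracy_with_logits()) |>
  fit(dl, epochs = 5)
```

A held-out dataloader could be passed via `valid_data =` in `fit()` to get test-set metrics each epoch.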