bytedeco / javacpp-presets

The missing Java distribution of native C++ libraries

[pytorch] Bug: invoking EmbeddingImpl's forward method fails for a normal tensor #1301

Closed mullerhai closed 1 year ago

mullerhai commented 1 year ago

Hi, in my user-defined model module I create several layers. The other layers work correctly, but when I add an EmbeddingImpl layer defined like this:

  var fc: EmbeddingImpl = register_module("fc", new EmbeddingImpl(128, 64))

when I invoke the forward method like this, I get an error:

val emb= fc.forward(x)

The error from the console log:


Exception in thread "main" java.lang.RuntimeException: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got CPUFloatType instead (while checking arguments for embedding)
Exception raised from checkScalarTypes at /Users/runner/work/javacpp-presets/javacpp-presets/pytorch/cppbuild/macosx-x86_64/pytorch/aten/src/ATen/TensorUtils.cpp:203 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) + 81 (0x110a35da1 in libc10.dylib)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 98 (0x110a33492 in libc10.dylib)
frame #2: at::checkScalarTypes(char const*, at::TensorArg const&, c10::ArrayRef<c10::ScalarType>) + 792 (0x19f1a8008 in libtorch_cpu.dylib)
frame #3: at::native::embedding(at::Tensor const&, at::Tensor const&, long long, bool, bool) + 121 (0x19f5b78d9 in libtorch_cpu.dylib)
frame #4: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, long long, bool, bool), &(at::(anonymous namespace)::(anonymous namespace)::wrapper__embedding(at::Tensor const&, at::Tensor const&, long long, bool, bool))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, long long, bool, bool> >, at::Tensor (at::Tensor const&, at::Tensor const&, long long, bool, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 36 (0x1a06ba7c4 in libtorch_cpu.dylib)
frame #5: at::_ops::embedding::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 132 (0x1a041af14 in libtorch_cpu.dylib)
frame #6: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool), &(torch::autograd::VariableType::(anonymous namespace)::embedding(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool))>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 1165 (0x1a1f0fa5d in libtorch_cpu.dylib)
frame #7: at::_ops::embedding::call(at::Tensor const&, at::Tensor const&, long long, bool, bool) + 348 (0x1a03a418c in libtorch_cpu.dylib)
frame #8: torch::nn::functional::detail::embedding(at::Tensor const&, at::Tensor const&, c10::optional<long long>, c10::optional<double>, double, bool, bool) + 736 (0x1a339e920 in libtorch_cpu.dylib)
frame #9: torch::nn::EmbeddingImpl::forward(at::Tensor const&) + 63 (0x1a339e62f in libtorch_cpu.dylib)
frame #10: Java_org_bytedeco_pytorch_EmbeddingImpl_forward + 192 (0x192d699b0 in libjnitorch.dylib)
frame #11: 0x0 + 4776702960 (0x11cb6b3f0 in ???)

    at org.bytedeco.pytorch.EmbeddingImpl.forward(Native Method)
    at org.rec.pytorch.Nets.forward(ChunkMnistTrain.scala:27)
    at org.rec.pytorch.ChunkMnistTrain$.$anonfun$main$1(ChunkMnistTrain.scala:75)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
    at org.rec.pytorch.ChunkMnistTrain$.main(ChunkMnistTrain.scala:66)
    at org.rec.pytorch.ChunkMnistTrain.main(ChunkMnistTrain.scala)
saudet commented 1 year ago

Sounds like that doesn't work with a float tensor. Please try with an integer tensor.

mullerhai commented 1 year ago

Sounds like that doesn't work with a float tensor. Please try with an integer one.

After I cast the tensor data type to int or long:


x = x.to(ScalarType.Int)
val emb= fc.forward(x)

I then get this error in the EmbeddingImpl forward method: java.lang.RuntimeException: index out of range in self

Exception in thread "main" java.lang.RuntimeException: index out of range in self
Exception raised from operator() at /Users/runner/work/javacpp-presets/javacpp-presets/pytorch/cppbuild/macosx-x86_64/pytorch/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1189 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) + 81 (0x128f10991 in libc10.dylib)
frame #1: at::native::index_select_out_cpu_(at::Tensor const&, long long, at::Tensor const&, at::Tensor&)::$_8::operator()(long long, long long) const + 472 (0x1954ab5f8 in libtorch_cpu.dylib)
frame #2: at::native::index_select_out_cpu_(at::Tensor const&, long long, at::Tensor const&, at::Tensor&) + 7514 (0x195470a5a in libtorch_cpu.dylib)
frame #3: at::native::index_select_cpu_(at::Tensor const&, long long, at::Tensor const&) + 103 (0x195490d67 in libtorch_cpu.dylib)
frame #4: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, long long, at::Tensor const&), &(at::(anonymous namespace)::(anonymous namespace)::wrapper__index_select(at::Tensor const&, long long, at::Tensor const&))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, long long, at::Tensor const&> >, at::Tensor (at::Tensor const&, long long, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, long long, at::Tensor const&) + 23 (0x19619be37 in libtorch_cpu.dylib)
frame #5: at::_ops::index_select::call(at::Tensor const&, long long, at::Tensor const&) + 313 (0x195a210e9 in libtorch_cpu.dylib)
frame #6: at::native::embedding(at::Tensor const&, at::Tensor const&, long long, bool, bool) + 1077 (0x1951b2c95 in libtorch_cpu.dylib)
frame #7: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, long long, bool, bool), &(at::(anonymous namespace)::(anonymous namespace)::wrapper__embedding(at::Tensor const&, at::Tensor const&, long long, bool, bool))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, long long, bool, bool> >, at::Tensor (at::Tensor const&, at::Tensor const&, long long, bool, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 36 (0x1962b57c4 in libtorch_cpu.dylib)
frame #8: at::_ops::embedding::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 132 (0x196015f14 in libtorch_cpu.dylib)
frame #9: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool), &(torch::autograd::VariableType::(anonymous namespace)::embedding(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool))>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 1165 (0x197b09b7d in libtorch_cpu.dylib)
frame #10: at::_ops::embedding::call(at::Tensor const&, at::Tensor const&, long long, bool, bool) + 348 (0x195f9f18c in libtorch_cpu.dylib)
frame #11: torch::nn::functional::detail::embedding(at::Tensor const&, at::Tensor const&, c10::optional<long long>, c10::optional<double>, double, bool, bool) + 736 (0x198f99920 in libtorch_cpu.dylib)
frame #12: torch::nn::EmbeddingImpl::forward(at::Tensor const&) + 63 (0x198f9962f in libtorch_cpu.dylib)
frame #13: Java_org_bytedeco_pytorch_EmbeddingImpl_forward + 192 (0x159165cd0 in libjnitorch.dylib)
frame #14: 0x0 + 4682717832 (0x1171c9a88 in ???)

    at org.bytedeco.pytorch.EmbeddingImpl.forward(Native Method)
    at org.pytorch.layer.FeaturesLinear.forward(FeaturesLinear.scala:23)
    at org.pytorch.model.LogisticRegressionModel.forward(LogisticRegressionModel.scala:18)
    at org.pytorch.example.ObjMain$.$anonfun$main$1(main.scala:102)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
    at org.pytorch.example.ObjMain$.main(main.scala:90)
    at org.pytorch.example.ObjMain.main(main.scala)
saudet commented 1 year ago

According to the docs, it seems to be expecting tensors of rank 2: https://pytorch.org/docs/master/generated/torch.nn.Embedding.html#torch.nn.Embedding

HGuillemet commented 1 year ago

Your x probably contains indices that are < 0 or >= 128.

mullerhai commented 1 year ago

https://pytorch.org/docs/master/generated/torch.nn.Embedding.html#torch.nn.Embedding

The input tensor shape is [500, 39], and it works perfectly in Python.

HGuillemet commented 1 year ago

Try to print x.min().item_int() and x.max().item_int() and see if they stay in the correct range.
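The check HGuillemet suggests can be made explicit in plain Scala. This is a hypothetical helper (the names `IndexRange` and `inRange` are mine, not part of the presets API): an Embedding(numEmbeddings, dim) only accepts indices i with 0 <= i < numEmbeddings.

```scala
// Hypothetical helper mirroring the min()/max() check: every index
// fed to Embedding(numEmbeddings, dim) must lie in [0, numEmbeddings).
object IndexRange {
  def inRange(indices: Seq[Long], numEmbeddings: Long): Boolean =
    indices.nonEmpty && indices.min >= 0 && indices.max < numEmbeddings
}
```

If this returns false for any batch, the native forward call will throw "index out of range in self" (or the equivalent scalar-type error).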

mullerhai commented 1 year ago

x.max().item_int()

I got the result: min 0, max 133.

HGuillemet commented 1 year ago

Your num_embeddings is 128, so 133 is out of range.

mullerhai commented 1 year ago

your num_embeddings is 128, 133 is out of range

I don't know why the indices are bigger than num_embeddings; I don't explicitly declare the tensor indices. Do you know what causes this? Thanks.

mullerhai commented 1 year ago

your num_embeddings is 128, 133 is out of range

How can I shrink the tensor indices to fit the Embedding layer?

HGuillemet commented 1 year ago

How can I answer this question? It's your code that generates x; this has nothing to do with JavaCPP. You usually feed an embedding layer with indices you know about, like user ids or class ids. Casting from float to int seems weird, but if it's the right thing to do for your problem, maybe you can scale the data while converting it to int, with a combination of max() and mul(). Or you can instead increase the dictionary size from 128 to 134 or higher.
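The two remedies above can be sketched in plain Scala. These are hypothetical helpers (`IndexFix`, `clamp`, `requiredSize` are names I chose for illustration, not presets API): either force the indices into range, or size the dictionary from the data.

```scala
object IndexFix {
  // Remedy 1 (lossy): clamp each index into [0, numEmbeddings - 1],
  // analogous to scaling with max()/mul() before the int cast.
  def clamp(indices: Seq[Long], numEmbeddings: Long): Seq[Long] =
    indices.map(i => math.max(0L, math.min(i, numEmbeddings - 1)))

  // Remedy 2: derive the dictionary size from the data itself
  // (max observed index + 1), so every index fits.
  def requiredSize(indices: Seq[Long]): Long = indices.max + 1
}
```

For the thread's numbers, a max index of 133 with num_embeddings 128 either clamps to 127 or requires growing the table to at least 134.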

mullerhai commented 1 year ago

How can I answer this question? It's your code that generates x; this has nothing to do with JavaCPP. You usually feed an embedding layer with indices you know about, like user ids or class ids. Casting from float to int seems weird, but if it's the right thing to do for your problem, maybe you can scale the data while converting it to int, with a combination of max() and mul(). Or you can instead increase the dictionary size from 128 to 134 or higher.

I found that every time I restart the process, the same input tensor produces different indices. Is it random?? I didn't edit or update the code; let's analyze the console output. I don't know how to control the index range.

  var fc: EmbeddingImpl = register_module("fc", new EmbeddingImpl(148, 64))
##first
FeaturesLinear before min  0 max 123
FeaturesLinear after y min  123 max 123 

##second
FeaturesLinear before min  0 max 150
Exception in thread "main" java.lang.RuntimeException: index out of range in self

##third
FeaturesLinear before min  0 max 125
FeaturesLinear after y min  125 max 125 x sum shape   two  x shape 500|32 sum yx shape 500|32|64
FeaturesLinear before min  0 max 553
Exception in thread "main" java.lang.RuntimeException: index out of range in self

##fourth
FeaturesLinear before min  0 max 149
Exception in thread "main" java.lang.RuntimeException: index out of range in self

##fifth
FeaturesLinear before min  0 max 235
Exception in thread "main" java.lang.RuntimeException: index out of range in self
HGuillemet commented 1 year ago

Before training, the parameters of a module are usually initialized randomly.

mullerhai commented 1 year ago

Before training, the parameters of a module are usually initialized randomly.

Excuse me, I found something confusing.

In one process, the input tensor's x.max().item_int() is 120 and new EmbeddingImpl(1048, 64) has num_embeddings 1048; the indices are < num_embeddings, and it works fine.

In another process, the input tensor's x.max().item_int() is 69565 and new EmbeddingImpl(73794, 64) has num_embeddings 73794, so the indices are also < num_embeddings, but it does not work and throws this error:

Exception in thread "main" java.lang.RuntimeException: index out of range in self
Exception raised from operator() at /Users/runner/work/javacpp-presets/javacpp-presets/pytorch/cppbuild/macosx-x86_64/pytorch/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1189 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) + 81 (0x109d01991 in libc10.dylib)
frame #1: at::native::index_select_out_cpu_(at::Tensor const&, long long, at::Tensor const&, at::Tensor&)::$_8::operator()(long long, long long) const + 640 (0x18a5e55a0 in libtorch_cpu.dylib)
frame #2: at::native::index_select_out_cpu_(at::Tensor const&, long long, at::Tensor const&, at::Tensor&) + 7514 (0x18a5aa95a in libtorch_cpu.dylib)
frame #3: at::native::index_select_cpu_(at::Tensor const&, long long, at::Tensor const&) + 103 (0x18a5cac67 in libtorch_cpu.dylib)
frame #4: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, long long, at::Tensor const&), &(at::(anonymous namespace)::(anonymous namespace)::wrapper__index_select(at::Tensor const&, long long, at::Tensor const&))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, long long, at::Tensor const&> >, at::Tensor (at::Tensor const&, long long, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, long long, at::Tensor const&) + 23 (0x18b2d5d47 in libtorch_cpu.dylib)
frame #5: at::_ops::index_select::call(at::Tensor const&, long long, at::Tensor const&) + 313 (0x18ab5aff9 in libtorch_cpu.dylib)
frame #6: at::native::embedding(at::Tensor const&, at::Tensor const&, long long, bool, bool) + 1077 (0x18a2ecb95 in libtorch_cpu.dylib)
frame #7: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, long long, bool, bool), &(at::(anonymous namespace)::(anonymous namespace)::wrapper__embedding(at::Tensor const&, at::Tensor const&, long long, bool, bool))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, long long, bool, bool> >, at::Tensor (at::Tensor const&, at::Tensor const&, long long, bool, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 36 (0x18b3ef6d4 in libtorch_cpu.dylib)
frame #8: at::_ops::embedding::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 132 (0x18b14fe24 in libtorch_cpu.dylib)
frame #9: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool), &(torch::autograd::VariableType::(anonymous namespace)::embedding(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool))>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 1165 (0x18cc43a8d in libtorch_cpu.dylib)
frame #10: at::_ops::embedding::call(at::Tensor const&, at::Tensor const&, long long, bool, bool) + 348 (0x18b0d909c in libtorch_cpu.dylib)
frame #11: torch::nn::functional::detail::embedding(at::Tensor const&, at::Tensor const&, c10::optional<long long>, c10::optional<double>, double, bool, bool) + 736 (0x18e0d3910 in libtorch_cpu.dylib)
frame #12: torch::nn::EmbeddingImpl::forward(at::Tensor const&) + 63 (0x18e0d361f in libtorch_cpu.dylib)
frame #13: Java_org_bytedeco_pytorch_EmbeddingImpl_forward + 192 (0x17daa1920 in libjnitorch.dylib)
frame #14: 0x0 + 4485479048 (0x10b5afa88 in ???)
mullerhai commented 1 year ago

Is there another parameter that controls or limits how the EmbeddingImpl layer is used?

HGuillemet commented 1 year ago

Did you check if the min() is >= 0 ?

mullerhai commented 1 year ago

Did you check if the min() is >= 0 ?

I checked the log: the min is -1? I don't know what causes this.

mullerhai commented 1 year ago

Did you check if the min() is >= 0 ?

@HGuillemet Excuse me, if you have time I'm eagerly waiting for your reply, thanks.

HGuillemet commented 1 year ago

I cannot say why some of your values are -1: I don't know anything about how you generate tensor x, and it may well be unrelated to JavaCPP and due to your pytorch model instead.

mullerhai commented 1 year ago

I cannot say why some of your values are -1: I don't know anything about how you generate tensor x, and it may well be unrelated to JavaCPP and due to your pytorch model instead.

I found it is caused by the [ChunkSharedBatchDataset] [ChunkRandomDataLoader]: when I print the data_loader batch data, the data min is -3! @saudet @HGuillemet


package org.pytorch.example

import com.github.tototoshi.csv.CSVReader
import org.bytedeco.javacpp.PointerScope
import org.bytedeco.pytorch.{ChunkDataReader, ChunkDataset, ChunkDatasetOptions, ChunkRandomDataLoader, ChunkSharedBatchDataset, DataLoaderOptions, ExampleStack, ExampleVector, RandomSampler, SGD, SGDOptions}
import org.bytedeco.pytorch.global.torch.{DeviceType, ScalarType, binary_cross_entropy, cross_entropy_loss, nll_loss, shiftLeft}
import org.pytorch.dataset.MultipleCSVRead
import org.pytorch.dataset.ReadCriteo.{statCategoryEnumCnt, transformCategoryToIndex}
import org.pytorch.model.{AutomaticFeatureInteractionModel, LogisticRegressionModel}
import org.saddle.csv
import org.bytedeco.pytorch.{Device => TorchDevice}

import java.io.File
import scala.collection.mutable.ListBuffer
import scala.io.Source
import org.bytedeco.javacpp._
import org.bytedeco.pytorch._
import org.bytedeco.pytorch.Module
import org.bytedeco.pytorch.global.torch
import org.bytedeco.pytorch.presets.torch.cout

class MullerLogisticRegressionModel(field_dims: Seq[Long], output_dim: Long = 1) extends Module {

  val fsize = field_dims.size
  val ssize = field_dims.sum
  var linear: EmbeddingImpl = register_module("linear", new EmbeddingImpl(73794, output_dim))
  var bias: Tensor = register_parameter("bias", torch.zeros(output_dim))
  var offsets: Array[Long] = field_dims.map(_.toInt).scanLeft(0)((ep, et) => ep + et).dropRight(1).map(_.toLong).toArray

//  val linear = new FeaturesLinear(field_dims)
//  register_module("linear", linear)

  def forward(xs: Tensor): Tensor = {

    var x = xs
    println(s"first x  ${ x.min().item_int()} max ${x.max().item_int()}  xs ${ xs.min().item_int()} max ${xs.max().item_int()}    shape ${x.shape().mkString("|")}") //first x shape 500|39
    println(s"FeaturesLinear first x ${ x.min().item_int()} max ${x.max().item_int()}    shape ${x.shape().mkString("|")}") //500|39
    val offsetsTensor = AbstractTensor.create(this.offsets: _*).unsqueeze(0)
    println(s"FeaturesLinear offsetsTensor shape ${offsetsTensor.shape().mkString("|")}") //offsetsTensor shape 1|39
    x = x.add(offsetsTensor).to(ScalarType.Long)

    println(s"FeaturesLinear second min  ${ x.min().item_int()} max ${x.max().item_int()}   emb 73794  sum emb ${ssize}  emb size ${fsize} x shape   ${x.shape().mkString("|")}") //500|39

    x  = this.linear.forward(x)

    println(s"FeaturesLinear third x ${ x.min().item_int()} max ${x.max().item_int()}    shape ${x.shape().mkString("|")}") //x 500,39,1
    x =  torch.sum(x,1)  //x (500,1)
    println(s"FeaturesLinear sum  shape ${x.shape().mkString("|")}")
    x = x.add(this.bias) // x
    println(s"second x shape ${x.shape().mkString("|")}") //(500
    x = x.squeeze(1)
    println(s"third x shape ${x.shape().mkString("|")}") //(500)
    x = torch.sigmoid(x)
    println(s"forth x shape ${x.shape().mkString("|")}")
    x
  }

}
object ObjMain {

  def dealDataWithSaddle(path:String ="/Users/zhanghaining/Downloads/criteo-small-train_1m.txt"):Unit ={

    val file = new File(path)
    val irisURL = "https://gist.githubusercontent.com/pityka/d05bb892541d71c2a06a0efb6933b323/raw/639388c2cbc2120a14dcf466e85730eb8be498bb/iris.csv"
    // irisURL: String = "https://gist.githubusercontent.com/pityka/d05bb892541d71c2a06a0efb6933b323/raw/639388c2cbc2120a14dcf466e85730eb8be498bb/iris.csv"
//    val iris = csv.CsvParser.parseSourceWithHeader[Double](
//      source = Source.fromURL(irisURL),
//      cols = List(0,1,2,3),
//      recordSeparator = "\n").toOption.get
    //    csv.CsvParser.parseSourceWithHeader()

    val data = csv.CsvParser.parseFileWithHeader[String](file)
    data.right

  }

  def dealWithCsvReader(criteoPath:String ="/Users/zhanghaining/Downloads/criteo-small-train_1m.txt") :Unit={
    val criteoData= CSVReader.open(criteoPath)
    val featureEnumIndexMapList = statCategoryEnumCnt(criteoData)
    val criteoData2= CSVReader.open(criteoPath)
    val featureIndexBuffer = transformCategoryToIndex(criteoData2,featureEnumIndexMapList)
  }
  @throws[Exception]
  def main(args: Array[String]): Unit = {
    try {
      val scope = new PointerScope
      System.setProperty("org.bytedeco.openblas.load", "mkl")
      val criteoPath ="/Users/zhanghaining/Downloads/criteo-small-train_1m.txt"
      val file = new File(criteoPath)
//      val data = csv.CsvParser.parseFileWithHeader[Double](file)
//      data.right
//      dealDataWithSaddle(criteoPath)

      val threadNum: Int = 10
      val minThreshold = 10
      val multiRead = new MultipleCSVRead(criteoPath,threadNum)
      val exampleVector = multiRead.readFileToExampleVector(multiRead.filePath,threadNum = multiRead.threadNum,minThreshold = multiRead.minThreshold)
      val dataBuffer = new ListBuffer[( Seq[Float],Seq[String],Float)]()
      val field_dims = Seq[Long](35,82,78,31,209,91,64,36,79,8,27,29,36,321,504,6373,7426,104,13,7100,168,4,7371,3663,6495,2800,27,4172,6722,11,2162,1130, 5,6554,11,15,5506, 48,4284)
      println(s"dataBuffer exampleVector ${exampleVector.size()} field_dims sum ${field_dims.sum} ") //73794
      val model = new MullerLogisticRegressionModel(field_dims)
//      val model = new AutomaticFeatureInteractionModel(
//        field_dims, embed_dim=16, atten_embed_dim=64, num_heads=2, num_layers=3, mlp_dims=Seq(400, 400), dropouts=Seq(0d, 0d, 0d))

      try{
        val batch_size = 500 // 256 // 64
//        val net = new Nets
        val prefetch_count = 1
//        val dataExample =  readMnist() //  readEXample() //
//        val optimizer = new SGD(model.parameters, new SGDOptions(/*lr=*/ 0.01))
        val optimizer = new Adam(model.parameters,new AdamOptions(/*lr=*/ 0.01))
        val device :TorchDevice =new TorchDevice(DeviceType.CPU)
        val data_reader = new ChunkDataReader() {
          override def read_chunk(chunk_index: Long): ExampleVector = {
//            new ExampleVector(dataExample: _*)
            exampleVector
          }
          override def chunk_count: Long = exampleVector.size()
          override def reset(): Unit = {
            //do not write anything here !!! not inherit virtual super method
          }
        }
        val sampler = new RandomSampler(0)
        //        val  sam = new SequentialSampler(0)
        val data_set = new ChunkSharedBatchDataset(new ChunkDataset(data_reader, sampler, sampler, new ChunkDatasetOptions(prefetch_count, batch_size))).map(new ExampleStack)
        val data_loader = new ChunkRandomDataLoader(data_set, new DataLoaderOptions(batch_size))
        for (epoch <- 1 to 10) {
          var it = data_loader.begin
          var batch_index =0
          while ( {
            !it.equals(data_loader.end)
          }) {

            val batch = it.access
            optimizer.zero_grad()
            val target =batch.target
            val data = batch.data
            println(s"data shape : ${data.shape().mkString(" | ")} data min ${ data.min().item_int()} max ${data.max().item_int()}   target shape: ${target.shape().mkString("|")}")
            val prediction = model.forward(data)

            val squeezeTarget = target.squeeze(1).to(device,ScalarType.Long)
            val  loss = binary_cross_entropy(prediction,squeezeTarget)
//            val loss = nll_loss(prediction,squeezeTarget)
            loss.backward()
            optimizer.step
            if ( {
              batch_index += 1;
              batch_index
            } % 1000 == 0) {
              System.out.println("Epoch: " + epoch + " | Batch: " + batch_index + " | Loss: " + loss.item_float)
              // Serialize your model periodically as a checkpoint.
              //                val archive = new OutputArchive
              //                net.save(archive)
              //                archive.save_to("net.pt")
            }
            //            println(s"batch.data.createIndexer  ${batch.data.createIndexer}  batch.target.createIndexer ${batch.target.createIndexer}")

            it = it.increment
          }
        }
      }finally if (scope != null) scope.close()
    }
  }

}
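The offsets line in the model above explains why indices can exceed any single field's cardinality: scanLeft turns the per-field dims into cumulative starting offsets, and after adding them the valid index range becomes [0, field_dims.sum), which is why num_embeddings must be the sum (73794 in this thread) rather than one field's size. A self-contained sketch of that computation (`OffsetsDemo` is a name I chose for illustration):

```scala
object OffsetsDemo {
  // Mirrors the model's offsets computation: the cumulative starting
  // offset of each field within the shared embedding table.
  def offsets(fieldDims: Seq[Long]): Array[Long] =
    fieldDims.scanLeft(0L)(_ + _).dropRight(1).toArray

  // The largest legal index after the offsets are added.
  def maxIndex(fieldDims: Seq[Long]): Long = fieldDims.sum - 1
}
```

For example, field dims (35, 82, 78) give offsets (0, 35, 117) and a maximum legal index of 194, so the table needs 195 rows in total.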

ining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/io/github/pityka/saddle-time_2.12/3.3.0/saddle-time_2.12-3.3.0.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/io/spray/spray-json_2.12/1.3.6/spray-json_2.12-1.3.6.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/joda-time/joda-time/2.1/joda-time-2.1.jar:/Users/zhanghaining/.m2/repository/junit/junit/4.8.2/junit-4.8.2.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/net/ericaro/neoitertools/1.0.0/neoitertools-1.0.0.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/net/sourceforge/csvjdbc/csvjdbc/1.0.37/csvjdbc-1.0.37.jar:/Users/zhanghaining/.m2/repository/net/sourceforge/f2j/arpack_combined_all/0.1/arpack_combined_all-0.1.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/apache/commons/commons-compress/1.21/commons-compress-1.21.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/apache/commons/commons-csv/1.9.0/commons-csv-1.9.0.jar:/Users/zhanghaining/.m2/repository/org/apache/commons/commons-lang3/3.12.0/commons-lang3-3.12.0.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/cpython-platform/3.10.8-1.5.8/cpython-platform-3.10.8-1.5.8.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/cpython/3.10.8-1.5.8/cpython-3.10.8-1.5.8.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/cpython/3.10.8-1.5.8/cpython-3.10.8-1.5.8-linux-arm64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/cpython/3.10.8-1.5.8/cpython-3.10.8-1.5.8-linux-armhf.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/cpython/3.10.8-1.5.8/cpython-3.10.8-1.5.8-linux-ppc64le.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/ma
ven2/org/bytedeco/cpython/3.10.8-1.5.8/cpython-3.10.8-1.5.8-linux-x86.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/cpython/3.10.8-1.5.8/cpython-3.10.8-1.5.8-linux-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/cpython/3.10.8-1.5.8/cpython-3.10.8-1.5.8-macosx-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/cpython/3.10.8-1.5.8/cpython-3.10.8-1.5.8-windows-x86.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/cpython/3.10.8-1.5.8/cpython-3.10.8-1.5.8-windows-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp-platform/1.5.9-SNAPSHOT/javacpp-platform-1.5.9-20221229.234425-119.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-android-arm64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-android-arm.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-android-x86.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-android-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-ios-arm64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-ios-x86_64.jar:/Users/zhanghaining/L
ibrary/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-linux-arm64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-linux-armhf.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-linux-ppc64le.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-linux-x86.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-linux-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-macosx-arm64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-macosx-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-windows-x86.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp/1.5.9-SNAPSHOT/javacpp-1.5.9-20221203.124328-76-windows-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/numpy-platform/1.23.4-1.5.8/numpy-platform-1.23.4-1.5.8.jar:/Users/zhanghaini
ng/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/numpy/1.23.4-1.5.8/numpy-1.23.4-1.5.8.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/numpy/1.23.4-1.5.8/numpy-1.23.4-1.5.8-linux-arm64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/numpy/1.23.4-1.5.8/numpy-1.23.4-1.5.8-linux-armhf.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/numpy/1.23.4-1.5.8/numpy-1.23.4-1.5.8-linux-ppc64le.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/numpy/1.23.4-1.5.8/numpy-1.23.4-1.5.8-linux-x86.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/numpy/1.23.4-1.5.8/numpy-1.23.4-1.5.8-linux-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/numpy/1.23.4-1.5.8/numpy-1.23.4-1.5.8-macosx-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/numpy/1.23.4-1.5.8/numpy-1.23.4-1.5.8-windows-x86.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/bytedeco/numpy/1.23.4-1.5.8/numpy-1.23.4-1.5.8-windows-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas-platform/0.3.21-1.5.9-SNAPSHOT/openblas-platform-0.3.21-1.5.9-20221230.155817-37.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-android-arm64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-android-arm.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas
/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-android-x86.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-android-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-ios-arm64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-ios-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-linux-arm64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-linux-armhf.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-linux-ppc64le.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-linux-x86.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-linux-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco
/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-macosx-arm64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-macosx-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-windows-x86.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/openblas/0.3.21-1.5.9-SNAPSHOT/openblas-0.3.21-1.5.9-20221104.072840-16-windows-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/pytorch-platform/1.13.1-1.5.9-SNAPSHOT/pytorch-platform-1.13.1-1.5.9-20230103.130207-4.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/pytorch/1.13.1-1.5.9-SNAPSHOT/pytorch-1.13.1-1.5.9-20230103.201959-18.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/pytorch/1.13.1-1.5.9-SNAPSHOT/pytorch-1.13.1-1.5.9-20230103.201959-18-linux-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/pytorch/1.13.1-1.5.9-SNAPSHOT/pytorch-1.13.1-1.5.9-20230103.201959-18-macosx-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/oss.sonatype.org/content/repositories/snapshots/org/bytedeco/pytorch/1.13.1-1.5.9-SNAPSHOT/pytorch-1.13.1-1.5.9-20230103.201959-18-windows-x86_64.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/joda/joda-convert/1.2/joda-convert-1.2.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/scala-lang/modules/scala-collection-compat_2.12/2.7.0/scala-collection-compat_2.12-2.7.0.jar:/Use
rs/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/scala-lang/scala-library/2.12.11/scala-library-2.12.11.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/scala-lang/scala-reflect/2.12.11/scala-reflect-2.12.11.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/scala-saddle/google-rfc-2445/20110304/google-rfc-2445-20110304.jar:/Users/zhanghaining/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/typelevel/cats-core_2.12/2.6.1/cats-core_2.12-2.6.1.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/typelevel/cats-kernel_2.12/2.6.1/cats-kernel_2.12-2.6.1.jar:/Users/zhanghaining/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/typelevel/simulacrum-scalafix-annotations_2.12/0.5.4/simulacrum-scalafix-annotations_2.12-0.5.4.jar org.pytorch.example.ObjMain
Splitting file criteo-small-train_1m.txt into 100 chunks of (at most) 10K records each took 1316
records from CSV file in_chunk_9.csv frame count 10000  took 1358
records from CSV file in_chunk_8.csv frame count 10000  took 1358
records from CSV file in_chunk_7.csv frame count 10000  took 1357
records from CSV file in_chunk_6.csv frame count 10000  took 1357
records from CSV file in_chunk_1.csv frame count 10000  took 1357
records from CSV file in_chunk_5.csv frame count 10000  took 1357
records from CSV file in_chunk_0.csv frame count 10000  took 1358
records from CSV file in_chunk_3.csv frame count 10000  took 1358
records from CSV file in_chunk_2.csv frame count 10000  took 1357
records from CSV file in_chunk_4.csv frame count 10000  took 1356
recordCnt 100000 allFrame row  100000 col 40
numberFeatureElementMedianMap index 1 median 1.0
numberFeatureElementMedianMap index 2 median 3.0
numberFeatureElementMedianMap index 3 median 8.0
numberFeatureElementMedianMap index 4 median 5.0
numberFeatureElementMedianMap index 5 median 2213.0
numberFeatureElementMedianMap index 6 median 37.0
numberFeatureElementMedianMap index 7 median 3.0
numberFeatureElementMedianMap index 8 median 8.0
numberFeatureElementMedianMap index 9 median 40.0
numberFeatureElementMedianMap index 10 median 1.0
numberFeatureElementMedianMap index 11 median 1.0
numberFeatureElementMedianMap index 12 median 0.0
numberFeatureElementMedianMap index 13 median 5.0
categoryFeatureElementCntMap generate complete featureDims 31|135|55|84|17|8|145|21|3|93|190|62|201|15|200|81|10|167|40|5|73|7|13|90|26|64|154|2694|945|137|23043|2057|630|157|1944|9|88|73|275
categoryFeatureElementIndexMap generate complete 
Example Generate Count 0 already took 2168
Example Generate Count 10000 already took 3308
Example Generate Count 20000 already took 4203
Example Generate Count 30000 already took 5104
Example Generate Count 40000 already took 6077
Example Generate Count 50000 already took 6980
Example Generate Count 60000 already took 7875
Example Generate Count 70000 already took 8766
Example Generate Count 80000 already took 9651
Example Generate Count 90000 already took 10525
final exampleVector 100000
exampleVector 100000
dataBuffer exampleVector 100000 field_dims sum 73794 
data shape : 500 | 39 data min -3 max 196   target shape: 500|1
first x  -3 max 196  xs -3 max 196    shape 500|39
FeaturesLinear first x -3 max 196    shape 500|39
FeaturesLinear offsetsTensor shape 1|39
FeaturesLinear second min  -1 max 69534   emb 73794  sum emb 73794  emb size 39 x shape   500|39
Exception in thread "main" java.lang.RuntimeException: index out of range in self
Exception raised from operator() at /Users/runner/work/javacpp-presets/javacpp-presets/pytorch/cppbuild/macosx-x86_64/pytorch/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1189 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) + 81 (0x12b188991 in libc10.dylib)
frame #1: at::native::index_select_out_cpu_(at::Tensor const&, long long, at::Tensor const&, at::Tensor&)::$_8::operator()(long long, long long) const + 640 (0x1977235a0 in libtorch_cpu.dylib)
frame #2: at::native::index_select_out_cpu_(at::Tensor const&, long long, at::Tensor const&, at::Tensor&) + 7514 (0x1976e895a in libtorch_cpu.dylib)
frame #3: at::native::index_select_cpu_(at::Tensor const&, long long, at::Tensor const&) + 103 (0x197708c67 in libtorch_cpu.dylib)
frame #4: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, long long, at::Tensor const&), &(at::(anonymous namespace)::(anonymous namespace)::wrapper__index_select(at::Tensor const&, long long, at::Tensor const&))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, long long, at::Tensor const&> >, at::Tensor (at::Tensor const&, long long, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, long long, at::Tensor const&) + 23 (0x198413d47 in libtorch_cpu.dylib)
frame #5: at::_ops::index_select::call(at::Tensor const&, long long, at::Tensor const&) + 313 (0x197c98ff9 in libtorch_cpu.dylib)
frame #6: at::native::embedding(at::Tensor const&, at::Tensor const&, long long, bool, bool) + 1077 (0x19742ab95 in libtorch_cpu.dylib)
frame #7: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&, long long, bool, bool), &(at::(anonymous namespace)::(anonymous namespace)::wrapper__embedding(at::Tensor const&, at::Tensor const&, long long, bool, bool))>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, long long, bool, bool> >, at::Tensor (at::Tensor const&, at::Tensor const&, long long, bool, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 36 (0x19852d6d4 in libtorch_cpu.dylib)
frame #8: at::_ops::embedding::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 132 (0x19828de24 in libtorch_cpu.dylib)
frame #9: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool), &(torch::autograd::VariableType::(anonymous namespace)::embedding(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool))>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool> >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long long, bool, bool) + 1165 (0x199d81a8d in libtorch_cpu.dylib)
frame #10: at::_ops::embedding::call(at::Tensor const&, at::Tensor const&, long long, bool, bool) + 348 (0x19821709c in libtorch_cpu.dylib)
frame #11: torch::nn::functional::detail::embedding(at::Tensor const&, at::Tensor const&, c10::optional<long long>, c10::optional<double>, double, bool, bool) + 736 (0x19b211910 in libtorch_cpu.dylib)
frame #12: torch::nn::EmbeddingImpl::forward(at::Tensor const&) + 63 (0x19b21161f in libtorch_cpu.dylib)
frame #13: Java_org_bytedeco_pytorch_EmbeddingImpl_forward + 192 (0x15b3de920 in libjnitorch.dylib)
frame #14: 0x0 + 4576487015 (0x110c7a667 in ???)
frame #15: 0x0 + 4576419904 (0x110c6a040 in ???)
frame #16: 0x0 + 4576419904 (0x110c6a040 in ???)
frame #17: 0x0 + 4576420541 (0x110c6a2bd in ???)

    at org.bytedeco.pytorch.EmbeddingImpl.forward(Native Method)
    at org.pytorch.example.MullerLogisticRegressionModel.forward(main.scala:44)
    at org.pytorch.example.ObjMain$.$anonfun$main$1(main.scala:139)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
    at org.pytorch.example.ObjMain$.main(main.scala:127)
    at org.pytorch.example.ObjMain.main(main.scala)

Process finished with exit code 1
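Both failures come down to the `indices` tensor handed to `EmbeddingImpl.forward`: it must have an integral dtype (Long or Int), and every value must lie in `[0, num_embeddings)`. The log above shows `second min -1` after the per-field offsets are added, so at least one index is negative, which is exactly what libtorch reports as "index out of range in self". A minimal pure-Python sketch of the offset scheme the log suggests (field dims taken from the `featureDims` line; the helper name is hypothetical):

```python
from itertools import accumulate

# First few per-field vocabulary sizes from the "featureDims" log line.
field_dims = [31, 135, 55, 84]

# Offset of each field inside the single flattened embedding table:
# field i starts at sum(field_dims[:i]).
offsets = [0] + list(accumulate(field_dims))[:-1]  # [0, 31, 166, 221]

num_embeddings = sum(field_dims)  # total rows in the shared embedding table

def to_table_index(field: int, raw_value: int) -> int:
    """Map a per-field categorical value onto a row of the shared table."""
    idx = offsets[field] + raw_value
    if not 0 <= idx < num_embeddings:
        # This is the condition libtorch raises
        # "index out of range in self" for.
        raise IndexError(f"index {idx} out of range [0, {num_embeddings})")
    return idx

print(to_table_index(1, 3))  # valid lookup into field 1 -> 34
# A raw value of -3 (the log shows "data min -3") underflows field 0:
# to_table_index(0, -3) raises IndexError.
```

So besides casting the input to an integer dtype (the first error in this thread), the raw feature values need to be shifted or clamped into each field's `[0, dim)` range before the offsets are added.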
HGuillemet commented 1 year ago

If you think there is a problem with the chunk data reader, could you please minimize your example so that it has no other dependencies, only performs the reading, and uses a minimal data file?

HGuillemet commented 1 year ago

Is this issue solved, and can it be closed?