Closed. mullerhai closed this 1 year ago.
I'm sure @HGuillemet or @sbrunk have some sample code for that somewhere?
I don't. I use my own utility classes for this kind of feature.
But do not use the Example(Pointer) constructor. It's a special construct available in all JavaCPP classes for pointer casting. Try Example(Tensor data, Tensor target) instead.
Same for ExampleStack. I guess you should use something like:
ExampleVector ev = new ExampleVector(132);
ev.put(ex1);
ev.put(ex2);
...
ExampleStack es = new ExampleStack();
Example stack = es.apply_batch(ev);
But then I don't think you can use an iterator on a stack. A stack is an example built from an array of examples. You need the dataset api for iterating over examples.
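To make the collation step above concrete: Stack::apply_batch takes a vector of Examples and produces a single Example whose data and target gained a leading batch dimension (internally libtorch calls torch::stack). Here is a self-contained plain-Java sketch of that idea, where a toy record with flat float arrays stands in for the real Example<Tensor, Tensor> (all names here are hypothetical stand-ins, not the JavaCPP API):

```java
import java.util.List;

public class ApplyBatchSketch {
    // Toy stand-in for torch::data::Example<Tensor, Tensor>: one sample's
    // data and target, each modeled as a flat float array.
    record Example(float[] data, float[] target) {}

    // Conceptually what Stack::apply_batch does: collate N examples into
    // one batch with a leading batch dimension (here [N][len] arrays
    // instead of real stacked tensors).
    static float[][] stack(List<Example> examples, boolean target) {
        float[][] out = new float[examples.size()][];
        for (int i = 0; i < examples.size(); i++) {
            Example e = examples.get(i);
            out[i] = target ? e.target() : e.data();
        }
        return out;
    }

    public static void main(String[] args) {
        List<Example> ev = List.of(
            new Example(new float[]{1f, 2f}, new float[]{0f}),
            new Example(new float[]{3f, 4f}, new float[]{1f}));
        float[][] batchData = stack(ev, false); // shape [2][2]
        System.out.println(batchData.length + " x " + batchData[0].length);
    }
}
```

The real apply_batch does the same collation over tensors, which is why every Example in the vector must hold valid tensors before calling it.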
Thanks, that's just what I want.
Let me look into it. The remaining question: how do I use ExampleIterator?
@mullerhai have a look at the MNIST example from @saudet on how you can use the ExampleIterator:
Or in Scala it would roughly look like this:
var it = dataLoader.begin()
while (!it.equals(dataLoader.end())) {
  val batch = it.access
  // do training step
  it = it.increment()
}
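For readers unfamiliar with the C++-style iterator protocol that JavaCPP mirrors here (begin/end sentinels, access to read the current element, increment to advance), the same loop shape can be sketched in self-contained plain Java. The Cursor class below is a hypothetical stand-in for the generated iterator type, not the JavaCPP API:

```java
import java.util.List;

public class IteratorSketch {
    // Hypothetical stand-in for a C++-style iterator: access() reads the
    // current element, increment() advances, and equality with the end()
    // sentinel terminates the loop.
    static final class Cursor {
        private final List<int[]> batches;
        private final int pos;
        Cursor(List<int[]> batches, int pos) { this.batches = batches; this.pos = pos; }
        int[] access() { return batches.get(pos); }
        Cursor increment() { return new Cursor(batches, pos + 1); }
        @Override public boolean equals(Object o) {
            return o instanceof Cursor && ((Cursor) o).pos == pos;
        }
        @Override public int hashCode() { return pos; }
    }

    static Cursor begin(List<int[]> b) { return new Cursor(b, 0); }
    static Cursor end(List<int[]> b)   { return new Cursor(b, b.size()); }

    public static void main(String[] args) {
        List<int[]> loader = List.of(new int[]{1, 2}, new int[]{3, 4});
        Cursor it = begin(loader);
        while (!it.equals(end(loader))) {
            int[] batch = it.access(); // a training step would go here
            System.out.println(batch[0] + "," + batch[1]);
            it = it.increment();
        }
    }
}
```

Note that, unlike java.util.Iterator, termination is detected by comparing against the end() sentinel rather than with hasNext().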
Example(Tensor data, Tensor target).

Example(Tensor data, Tensor target) gives a compile error; Example doesn't have this constructor. How do I do that?
I need your help. I'm hitting many errors. Please show me one complete code template, from tensor creation to Example. Thanks.
import org.bytedeco.pytorch.global.torch
import org.bytedeco.pytorch.Example
import org.bytedeco.pytorch.global.torch.DeviceType
import org.bytedeco.pytorch.{Device => TorchDevice}
val device: TorchDevice = new TorchDevice(DeviceType.CPU) // or torch.device("cpu")
var tensor = torch.randn(34, 34)
tensor.to(device, tensor.scalar_type().intern()) // note: to() returns a new tensor; it does not modify `tensor` in place
val tensor2 = torch.randn(34, 34)
tensor2.to(device, tensor.scalar_type().intern())
import org.bytedeco.pytorch.{ExampleStack, ExampleVector}
val ev = new ExampleVector(132) // pre-allocates 132 default-constructed Examples holding undefined tensors
// ev.put(tensor) // will crash: put expects an Example, not a Tensor
// ev.put(tensor2)
val es = new ExampleStack
val stack: Example = es.apply_batch(ev) // crashes here: the default Examples hold undefined tensors
stack.data(tensor) // Exception in thread "main" java.lang.RuntimeException: tensor does not have a device
stack.target(tensor2) // same error
Error console:
Exception in thread "main" java.lang.RuntimeException: tensor does not have a device
Exception raised from device at /Users/runner/work/javacpp-presets/javacpp-presets/pytorch/cppbuild/macosx-x86_64/pytorch/c10/core/TensorImpl.h:902 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) + 81 (0x109acb801 in libc10.dylib)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 197 (0x109ac9025 in libc10.dylib)
frame #2: at::TensorBase::options() const + 155 (0x172ac52eb in libtorch_cpu.dylib)
frame #3: at::meta::structured_cat::meta(c10::IListRef<at::Tensor>, long long) + 780 (0x1730c373c in libtorch_cpu.dylib)
frame #4: at::(anonymous namespace)::wrapper_cat(c10::ArrayRef<at::Tensor>, long long) + 91 (0x173c8fa2b in libtorch_cpu.dylib)
frame #5: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::ArrayRef<at::Tensor>, long long), &(at::(anonymous namespace)::wrapper_cat(c10::ArrayRef<at::Tensor>, long long))>, at::Tensor, c10::guts::typelist::typelist<c10::ArrayRef<at::Tensor>, long long> >, at::Tensor (c10::ArrayRef<at::Tensor>, long long)>::call(c10::OperatorKernel*, c10::DispatchKeySet, c10::ArrayRef<at::Tensor>, long long) + 23 (0x173c8f9b7 in libtorch_cpu.dylib)
frame #6: at::_ops::cat::call(c10::ArrayRef<at::Tensor>, long long) + 498 (0x173490e92 in libtorch_cpu.dylib)
frame #7: at::native::stack(c10::ArrayRef<at::Tensor>, long long) + 332 (0x1730dbe5c in libtorch_cpu.dylib)
frame #8: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::ArrayRef<at::Tensor>, long long), &(at::(anonymous namespace)::(anonymous namespace)::wrapper__stack(c10::ArrayRef<at::Tensor>, long long))>, at::Tensor, c10::guts::typelist::typelist<c10::ArrayRef<at::Tensor>, long long> >, at::Tensor (c10::ArrayRef<at::Tensor>, long long)>::call(c10::OperatorKernel*, c10::DispatchKeySet, c10::ArrayRef<at::Tensor>, long long) + 23 (0x173ccbef7 in libtorch_cpu.dylib)
frame #9: at::_ops::stack::redispatch(c10::DispatchKeySet, c10::ArrayRef<at::Tensor>, long long) + 102 (0x1736ca796 in libtorch_cpu.dylib)
frame #10: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, c10::ArrayRef<at::Tensor>, long long), &(torch::autograd::VariableType::(anonymous namespace)::stack(c10::DispatchKeySet, c10::ArrayRef<at::Tensor>, long long))>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, c10::ArrayRef<at::Tensor>, long long> >, at::Tensor (c10::DispatchKeySet, c10::ArrayRef<at::Tensor>, long long)>::call(c10::OperatorKernel*, c10::DispatchKeySet, c10::ArrayRef<at::Tensor>, long long) + 799 (0x174dfdd4f in libtorch_cpu.dylib)
frame #11: at::_ops::stack::call(c10::ArrayRef<at::Tensor>, long long) + 498 (0x173652932 in libtorch_cpu.dylib)
frame #12: torch::data::transforms::Stack<torch::data::Example<at::Tensor, at::Tensor> >::apply_batch(std::__1::vector<torch::data::Example<at::Tensor, at::Tensor>, std::__1::allocator<torch::data::Example<at::Tensor, at::Tensor> > >) + 173 (0x166ba9bfd in libjnitorch.dylib)
frame #13: Java_org_bytedeco_pytorch_ExampleStack_apply_1batch + 202 (0x16624ae4a in libjnitorch.dylib)
frame #14: 0x0 + 4597285872 (0x1120503f0 in ???)
frame #15: 0x0 + 4597263424 (0x11204ac40 in ???)
at org.bytedeco.pytorch.ExampleStack.apply_batch(Native Method)
at com.tensor.conv.exampleConverter$.main(exampleConverter.scala:26)
at com.tensor.conv.exampleConverter.main(exampleConverter.scala)
Process finished with exit code 1
We can now use ChunkDataReader and ChunkDataset for this; see commit https://github.com/bytedeco/javacpp-presets/commit/fa4dfdc4d7d4f1912535ba2ac682cd32fa13eb98.
Duplicate of #1215
I want to know when we will release these
Sometime next year. In the meantime, snapshots are always available: http://bytedeco.org/builds/
OK, I will wait for the release. If convenient, please also add the sequenceSampler class. Thanks.
SequentialSampler? It's already there: https://github.com/bytedeco/javacpp-presets/blob/master/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialSampler.java
So wonderful, I feel excited. If one day torch-serve could support javacpp-pytorch, it would be a complete ML pipeline for the Java/Scala ML environment.
I think it's more likely to get it integrated into DJL than TorchServe, but either way someone needs to spend time (that is money) on this...
@sbrunk What do you think we should be doing for serving? We need to reuse something that already exists...
I think until we're able to export a model to TorchScript, we can't use an existing serving library. Both TorchServe and DJL for example need a TorchScript model (or a pure Python model for TorchServe) for inference.
As far as I understand, the C++ API currently does not support methods like tracing or scripting to produce TorchScript modules. It might be possible to create them manually via the API, though, so it could be possible to build tracing functionality. HaskTorch, which also uses libtorch, seems to support tracing, but I guess it's a non-trivial effort.
It's always possible though to use any JVM REST/GRPC/... framework to build a service doing inference on a model written in Java to provide a simple serving solution.
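As a sketch of that last point, here is a minimal self-contained inference service using only the JDK's built-in com.sun.net.httpserver, with a trivial score function standing in for a real model. The endpoint name, port, and the score function are all hypothetical; a real handler would parse the request into a Tensor and call the module's forward():

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class TinyInferenceServer {
    // Hypothetical stand-in for running a model: a real implementation
    // would build a Tensor from the features and run forward() on it.
    static double score(double[] features) {
        double s = 0;
        for (double f : features) s += f; // dummy "model": sum of inputs
        return s;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/predict", exchange -> {
            // Expect a comma-separated list of numbers in the request body.
            String body = new String(exchange.getRequestBody().readAllBytes(),
                                     StandardCharsets.UTF_8);
            double[] features = java.util.Arrays.stream(body.split(","))
                    .mapToDouble(s -> Double.parseDouble(s.trim())).toArray();
            byte[] reply = String.valueOf(score(features))
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(reply); }
        });
        server.start();
        System.out.println("Serving on http://localhost:8080/predict");
    }
}
```

The same skeleton works with any JVM REST or gRPC framework; only the body of score changes when a real JavaCPP PyTorch model is plugged in.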
Actually, I might be wrong about DJL. While it seems to only support TorchScript and other standard model formats out of the box, perhaps providing an integration with JavaCPP PyTorch based models is not too difficult.
Right, the APIs of torch::jit::Module and torch::nn::Module are pretty much the same. It's more a question of who's going to maintain the integration in DJL. If I understand correctly, the guys over there don't receive many requests about training anymore, so DJL is pretty much geared toward inference only. @frankfliu Am I right?
I'm sorry, I don't have context here. Are you trying to train your model in java?
DJL does support training in Java; you can try it out. But we do focus on optimizing for inference. If you are interested in serving, you might want to take a look at DJLServing. DJLServing is a superset of TorchServe; it can even serve .MAR files out of the box.
I'm sorry, I don't have context here. Are you trying to train your model in java?
Not just train models, but create them in Java as well. DJL doesn't support enough features to make it useful for that.
DJL does support training in Java; you can try it out. But we do focus on optimizing for inference. If you are interested in serving, you might want to take a look at DJLServing. DJLServing is a superset of TorchServe; it can even serve .MAR files out of the box.
Right, that's what I thought. So if someone wanted to serve a model created with the C++ API of PyTorch, that someone would first need to update that stuff to make it work with such a model, right?
Hi, without the dataset, I need to create an instance of the class Example, but I meet an error, and I can't find a tutorial for this. Please help.
Error log: