liuliu / s4nnc

Swift for NNC
https://libnnc.org
BSD 3-Clause "New" or "Revised" License

How to initialize a tensor from a data pointer? #3

Closed ghost closed 1 year ago

ghost commented 1 year ago

I have an Array (or an unsafe pointer) containing data. How do I set the data of a Tensor from that memory location? Using a for loop is slow. Is there a faster way?

liuliu commented 1 year ago

https://liuliu.github.io/s4nnc/documentation/nnc/tensor/init(_:_:_:)
https://liuliu.github.io/s4nnc/documentation/nnc/tensor/init(numpy:)
https://github.com/liuliu/s4nnc/blob/main/nnc/CoreMLConversion.swift#L8

ghost commented 1 year ago

Thanks

I tried this:

let arr = [Float16](repeating: 1, count: 2*64*64*2)
let ten = DynamicGraph.Tensor<UseFloatingPoint>(arr, .GPU(0) , .NHWC(2, 64, 64, 4) )

but this does not work. Could you give a snippet?

ghost commented 1 year ago

What about copying data from an array to a variable? Is it a good idea to create a new variable over and over from the tensor?

liuliu commented 1 year ago

DynamicGraph.Tensor is not the Tensor I referenced. It probably should be:

let ten = Tensor<Float16>(arr, .CPU, .NHWC(2, 64, 64, 4)).toGPU(0)

Also, if you just need to initialize to a constant value, you can just:

let ten = graph.variable(.GPU(0), .NHWC(2, 64, 64, 4), of: Float16.self)
ten.full(1)

Anyway, you probably should never manually construct a DynamicGraph.Tensor<XXX> object. Use convenience methods such as graph.variable() or graph.constant(), or when you get a DynamicGraph.AnyTensor, use .as(of: XXX.self) to cast.
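Putting the pieces from this thread together, here is a minimal sketch of the recommended flow: build a CPU tensor from an existing array, move it to the GPU, and wrap it via graph.variable() instead of constructing DynamicGraph.Tensor by hand. This is untested and assumes the module name NNC and the initializer shapes from the docs linked above; note the element count must match the shape.

```swift
import NNC

let graph = DynamicGraph()

// Build a CPU tensor directly from an existing Swift array
// (a bulk copy, no per-element loop), then move it to the GPU.
// 2 * 64 * 64 * 4 elements to match the NHWC(2, 64, 64, 4) shape.
let arr = [Float16](repeating: 1, count: 2 * 64 * 64 * 4)
let gpuTensor = Tensor<Float16>(arr, .CPU, .NHWC(2, 64, 64, 4)).toGPU(0)

// Wrap it as a graph variable via the convenience method rather than
// constructing a DynamicGraph.Tensor<...> object manually.
let v = graph.variable(gpuTensor)
```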

ghost commented 1 year ago

Thanks, you are amazing!!!

I am creating a new tensor for each mini-batch. Now if I call graph.variable over and over, will it keep adding to the memory (due to the cache), or will it automatically be GC'ed when the reference to the graph.variable is removed?

For example, if I call graph.variable(my_tensor.toGPU(0))?

liuliu commented 1 year ago

It depends. If you do

graph.withNoGrad {
}

then autodiff is explicitly disabled, and in this case the variable will be automatically GC'ed when the reference to it is removed.

If autodiff is not disabled, the variable might be kept alive even after all explicit references are removed, because of autodiff bookkeeping (if some values are derived from this variable and we might need to compute its gradient, it will be kept in memory).
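Based on that, a sketch of a per-mini-batch loop that avoids accumulating autodiff state by scoping each batch inside graph.withNoGrad. The names batches and batch.tensor are placeholders for your own data pipeline, and this is untested against the actual API:

```swift
import NNC

let graph = DynamicGraph()

for batch in batches {
  graph.withNoGrad {
    // Variables created in here are not tracked for gradients, so they
    // can be freed as soon as the last reference goes away.
    let input = graph.variable(batch.tensor.toGPU(0))
    // ... run the model on `input` ...
  }
}
```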

ghost commented 1 year ago

Okay, cool. And is there a function to get an array / pointer from a Tensor?

liuliu commented 1 year ago

If you are using graph.variable, you can access its underlying Tensor with ten.rawValue, and after that you can use:

https://liuliu.github.io/s4nnc/documentation/nnc/tensor/withunsafebytes(_:)
https://github.com/liuliu/s4nnc/blob/main/nnc/CoreMLConversion.swift#L22
https://github.com/liuliu/s4nnc/blob/main/nnc/Tensor.swift#L899
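For example, a minimal sketch of reading the data back out with the withUnsafeBytes API linked above. This assumes v is an existing GPU-resident graph variable and that toCPU() is available to bring it back to host memory; it is untested, so treat the exact calls as assumptions:

```swift
import NNC

// `v` is assumed to be a DynamicGraph variable living on the GPU; move
// the underlying Tensor back to the CPU, then view its raw bytes.
let cpuTensor = v.rawValue.toCPU()
cpuTensor.withUnsafeBytes { bytes in
  // Reinterpret the raw buffer as Float16 elements.
  let elements = bytes.bindMemory(to: Float16.self)
  // ... copy `elements` into an Array or inspect it here ...
}
```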

ghost commented 1 year ago

Thanks