Open BlendingInfinite opened 5 years ago
@oicirtap is working on a new API to address problems like this. @oicirtap is this solved by your API?
@oicirtap I would like to work on this. Is it possible that I can help you in some way?
Apologies for the late reply. I believe he's been working hard on this piece, but we'd love to involve you in the API development. @oicirtap, could you weigh in on possible ways to involve @MoritzN89 (if you're still interested)? @oicirtap will also be writing a master's thesis and some documentation describing his work.
Sorry for the late reply. As Fred mentioned, if you are interested in adding deep copy functionality to the Tensor class, that'd be great. Did you have a particular implementation in mind? I'd be happy to talk about different possible implementations and design decisions if that's helpful.
Thank you for giving me the possibility to work on this API! I think a deep copy implemented by computing a copy index expression could incur unnecessary computational cost, because it seems to require a recompilation. Maybe it would be faster to deep copy the content and coordinate buffers directly.
The helper functions can be inferred from the content pointer or simply be cloned as well. Besides helping to implement this approach, I would also like to run some benchmarks comparing the index-expression method with the implementation idea stated above (with and without directly copying the helper functions). We could discuss this in more detail, maybe via IRC, Slack, or some other communication platform?
Hey @MoritzN89, again sorry for the delayed reply.
So regarding the deep copy approach to take, I agree with you that a copy index expression would potentially have a large overhead, so manually copying the content and coordinate buffer objects might be the best approach. I also think benchmarks are a great idea.
In terms of an alternate method of communication, we have a taco Slack group, but I am not sure who coordinates it or who exactly has access, so I'll get back to you about that. I am not sure what IRC stands for. What do you think, @fredrikbk? Maybe we could open a Slack channel with @MoritzN89?
I am about to write a wrapper to ease the extraction of values from a tensor. Currently, to access the values in storage, one first has to invoke the `pack()` function of the tensor instance. However, if new tensor entries are inserted afterwards and `pack()` is called again, all entries inserted before the previous call seem to vanish.

The only solution I currently have for accessing tensor values without losing them in the next step is to create a deep copy of the tensor. Currently, only shallow copies are possible, probably for performance reasons. In any case, I am convinced that such functionality is necessary in general. I would be glad if someone has a solution for my purpose; the only one I have found is to change the framework internally by accessing the underlying buffers.