Cytnx-dev / Cytnx

Project Cytnx, a cross-section of Python & C++, tensor network library
Apache License 2.0

Refactor `Storage` with templates #469

Open IvanaGyro opened 2 months ago

IvanaGyro commented 2 months ago

Try to refactor Storage with templates to reduce duplicated code and avoid manual low-level memory management.
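As a rough, hypothetical sketch of the direction (the names below are illustrative, not the actual Cytnx classes), a single class template can replace a per-dtype family of storage classes and the manual allocation code:

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Hypothetical sketch: one class template instead of one hand-written class
// per dtype. std::vector owns the buffer, so there is no explicit
// malloc/free and no custom destructor to maintain.
template <typename T>
class StorageImpl {
 public:
  explicit StorageImpl(std::size_t size) : data_(size) {}

  T* data() { return data_.data(); }
  const T* data() const { return data_.data(); }
  std::size_t size() const { return data_.size(); }
  void resize(std::size_t new_size) { data_.resize(new_size); }

 private:
  std::vector<T> data_;  // replaces raw pointers + manual memory management
};

// The same template covers every dtype that previously needed its own class:
// StorageImpl<double>, StorageImpl<float>, StorageImpl<std::complex<double>>, ...
```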

kaihsin commented 2 months ago

Make sure you stage the progress into multiple PRs instead of making one big PR for all the modifications. Other than that, LGTM so far.

kaihsin commented 2 months ago

@IvanaGyro Let me know when you reach a point where the refactor is ready. I want to start moving Storage + Tensor into another repo, cytnx-core.

IvanaGyro commented 2 months ago

I suggest deciding whether to move Storage and Tensor after the refactoring process. Currently, only three files (excluding CUDA kernels) remain for Storage. Furthermore, we can combine Tensor with Tensor_Impl and merge Storage with Storage_base, provided we maintain the current API and ensure high memory efficiency. If we also consider replacing Storage with thrust::vector, there will only be two classes beneath Tensor. If there are only a few files below the Tensor level, keeping those files in the same repository could simplify maintenance.
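For illustration only, and assuming Thrust's host/device vectors are what is meant (Thrust has no single thrust::vector type), a storage layer built on them could be as small as:

```cpp
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>

// Hypothetical sketch: if Storage were backed by Thrust containers, the CPU
// and GPU buffers would each be a thin wrapper over a vector that manages its
// own memory, leaving very little code beneath the Tensor level.
template <typename T>
struct HostStorage {
  thrust::host_vector<T> data;  // host (CPU) memory
};

template <typename T>
struct DeviceStorage {
  thrust::device_vector<T> data;  // device (GPU) memory
};
```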

kaihsin commented 2 months ago

I want to keep the wrapper classes (Tensor and Storage) separate from their impls. We will need to separate the package for a few strategic reasons.

IvanaGyro commented 2 months ago

It's fine to leave Tensor and Storage as abstract classes apart from their implementation.

I wonder what the reasons are for splitting the repo. What are the plans for maintaining the dependencies among the linear algebra components, the CUDA kernels, and the other high-level components?

kaihsin commented 2 months ago

> It's fine to leave Tensor and Storage as abstract classes apart from their implementation.
>
> I wonder what the reasons are for splitting the repo. What are the plans for maintaining the dependencies among the linear algebra components, the CUDA kernels, and the other high-level components?

The plan is to separate UniTensor from the underlying Tensor, so other people can use Tensor on its own and also bridge torch.Tensor, xtensor, JAX, and NumPy directly.

kaihsin commented 2 months ago

All GPU/CUDA kernels for Tensor (and below) should go to cytnx-core.

kaihsin commented 2 months ago

> It's fine to leave Tensor and Storage as abstract classes apart from their implementation.
>
> I wonder what the reasons are for splitting the repo. What are the plans for maintaining the dependencies among the linear algebra components, the CUDA kernels, and the other high-level components?

It's more like a wrapper than an abstract class. Storage_base / Tensor_impl were supposed to be the abstract classes, and Tensor / Storage are the wrappers for the C++ API.
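A minimal sketch of that split, with simplified names and only the ownership structure shown:

```cpp
#include <cstddef>
#include <memory>

// Simplified sketch of the layering described above: Storage_base is the
// abstract implementation class, and Storage is the thin wrapper exposed as
// the C++ API.
class Storage_base {
 public:
  virtual ~Storage_base() = default;
  virtual std::size_t size() const = 0;
};

class Storage {
 public:
  explicit Storage(std::shared_ptr<Storage_base> impl) : impl_(std::move(impl)) {}
  std::size_t size() const { return impl_->size(); }

 private:
  std::shared_ptr<Storage_base> impl_;  // copies of Storage share one buffer
};
```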

yingjerkao commented 2 months ago

> It's fine to leave Tensor and Storage as abstract classes apart from their implementation. I wonder what the reasons are for splitting the repo. What are the plans for maintaining the dependencies among the linear algebra components, the CUDA kernels, and the other high-level components?
>
> It's more like a wrapper than an abstract class. Storage_base / Tensor_impl were supposed to be the abstract classes, and Tensor / Storage are the wrappers for the C++ API.

In light of this, I wonder if there is a class diagram explaining all of this somewhere? It should be included in the developer's manual.

kaihsin commented 2 months ago

I would love to chat with @IvanaGyro and the people who are involved in this. I feel it would be nice to have a conversation about why it was designed that way, what the concerns were, and whether there are better ways to address them.

A few conversations across the issues make me think we are not on the same page, and I think it is also hard for @IvanaGyro to work on things while lacking information.

yingjerkao commented 2 months ago

Still, it would be nice to have some documentation. At some point I understood the design, but I don't recall all the details.

kaihsin commented 2 months ago

I suggest we have a meeting, and maybe someone can take notes?

ianmccul commented 2 months ago

dtype and device would be better implemented using std::variant. That way, the enumeration of all of the possible types and devices only needs to happen once. That would turn a lot of runtime errors (such as a missing function for a specific dtype or device) into compile-time errors, and it would be much easier to maintain. Single or multiple dispatch is really easy with std::variant and visitors. With the current design, adding a new dtype would be a major hassle, and there is no way to check that all of the required functions are supplied without runtime testing.
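A minimal sketch of that idea, assuming a small illustrative set of element types (not the actual Cytnx dtype list):

```cpp
#include <complex>
#include <cstddef>
#include <iostream>
#include <variant>
#include <vector>

// Sketch: the buffer is a variant over the supported element types, so the
// list of dtypes is written in exactly one place.
using StorageVariant = std::variant<std::vector<float>,
                                    std::vector<double>,
                                    std::vector<std::complex<double>>>;

// Single dispatch via std::visit. A generic visitor covers every alternative;
// a visitor written per type would fail to compile if a newly added dtype
// were not handled, instead of throwing "unsupported dtype" at runtime.
std::size_t NumElements(const StorageVariant& storage) {
  return std::visit([](const auto& vec) { return vec.size(); }, storage);
}

int main() {
  StorageVariant s = std::vector<double>(10, 1.0);
  std::cout << NumElements(s) << '\n';  // prints 10
}
```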

Also, for the C++ code, is there a reason for restricting the element types of a Tensor to a specific set? For Python it obviously has to be a bounded set, but C++ code could be allowed to construct a Tensor of any type, e.g. if I wanted to experiment with a Tensor<float128> or a Tensor<BigFloat>?

Edit: github ate my formatting!
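To make the contrast concrete, a tensor templated directly on its element type (a hypothetical sketch, not the current Cytnx API) compiles for any T that supports the operations used, while the Python bindings would still only instantiate a fixed set:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: a tensor templated on its element type accepts any T,
// e.g. a quad-precision or arbitrary-precision number type, as long as the
// operations used on T are defined.
template <typename T>
class BasicTensor {
 public:
  explicit BasicTensor(std::vector<std::size_t> shape) : shape_(std::move(shape)) {
    std::size_t total = 1;
    for (std::size_t dim : shape_) total *= dim;
    data_.resize(total);
  }

  T& operator[](std::size_t flat_index) { return data_[flat_index]; }
  std::size_t rank() const { return shape_.size(); }

 private:
  std::vector<std::size_t> shape_;
  std::vector<T> data_;
};

// In C++, BasicTensor<__float128> (GCC) or BasicTensor<SomeBigFloat> would
// instantiate fine; only the Python bindings need a bounded dtype list.
```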