The new QTensorNetwork layer (now in the default optimal "layer stack," "above" QUnitMulti) was envisioned as a wrapper for cuTensorNetwork, since Qrack did not choose to pursue its own tensor network methods (beyond rudimentary internals) for the past 6 years of project development. However, the layer proves valuable even without cuTensorNetwork, for now. (A planned upcoming patch release will add "transparent" switch-off between "conventional" Qrack simulators and a cuTensorNetwork-based implementation.)
In adjusting the paradigms of Qrack and cuTensorNetwork to work together, it seemed natural that any tensor network simulation should still inherit from QInterface and implement the general user-code interface shared by all Qrack simulator types. (Notably, Compose(), Decompose(), Dispose(), and the QAlu and QParity methods that lack QInterface decompositions are not supported by QTensorNetwork, now or potentially ever; residing at a level "above" the QUnit layer, Schmidt decomposition and Kronecker products are not actually of immediate benefit!) Implementing QInterface implies following a "Just-In-Time (JIT) state machine" model for simulators. Further, we'd like to uphold Qrack's design principle of "transparently" or automatically selecting "the right tool for the job," without requiring major investment by users in understanding different potential configurations.
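To illustrate the design constraint described above, here is a minimal sketch (not Qrack's actual class or method names, and the width threshold is purely illustrative): every backend, tensor-network included, exposes one common virtual interface, so a factory can pick "the right tool for the job" without the user branching on simulator type.

```cpp
#include <memory>
#include <string>

// Stand-in for Qrack's QInterface: one abstract surface for all backends.
struct ISim {
    virtual ~ISim() = default;
    virtual const char* backend() const = 0;
};

// "Conventional" simulation, assumed best at low-to-moderate width.
struct ConventionalSim : ISim {
    const char* backend() const override { return "conventional"; }
};

// Tensor-network simulation, reserved for high-width extremes.
struct TensorNetworkSim : ISim {
    const char* backend() const override { return "tensor-network"; }
};

// Transparent selection: user code never names a backend directly.
// (The cutoff of 28 qubits is a made-up example, not Qrack's heuristic.)
std::unique_ptr<ISim> makeSim(unsigned width) {
    if (width > 28) return std::make_unique<TensorNetworkSim>();
    return std::make_unique<ConventionalSim>();
}
```

The point is only the shape of the dispatch: user code holds an `ISim` pointer and stays identical whichever backend the factory selects.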
We have achieved this with a wrapper on QCircuit! Since QCircuit was designed to anticipate "dynamic" resizing and use, whereas cuTensorNetwork ultimately needs many fixed-size allocations to represent tensors, it makes sense to offload as much work as possible via QCircuit before converting to the cuTensorNetwork representation. Further, my guess (as developer) is that "conventional" Qrack simulation is going to be more efficient than cuTensorNetwork in its already-established domain of best applicability. Hence, we actually only need cuTensorNetwork at high-width and high-entanglement extremes, leading to a fully self-standing QTensorNetwork layer whose internal state is a simple structure based on QCircuit and measurement, even without cuTensorNetwork (though an option to use that API, internally to QTensorNetwork, will follow soon)!
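The "JIT state machine" pattern above can be sketched as follows. This is a hypothetical toy, not Qrack's implementation: gate calls are only recorded into a circuit buffer (playing the QCircuit role), and amplitudes are materialized lazily, the first time a measurement or probability output forces evaluation.

```cpp
#include <cmath>
#include <complex>
#include <functional>
#include <vector>

using Amp = std::complex<double>;
using Gate = std::function<void(std::vector<Amp>&)>;

class JitSim {
    size_t nQubits;
    std::vector<Gate> circuit;  // deferred gate buffer (the "QCircuit" role)
    std::vector<Amp> state;     // materialized only on demand
    bool materialized = false;

    // Run the buffered circuit just-in-time, then clear the buffer.
    void flush() {
        if (!materialized) {
            state.assign(size_t(1) << nQubits, Amp(0));
            state[0] = Amp(1);  // |0...0>
            materialized = true;
        }
        for (auto& g : circuit) g(state);
        circuit.clear();
    }

public:
    explicit JitSim(size_t n) : nQubits(n) {}

    // Gate calls record a closure; nothing executes yet.
    void H(size_t q) {
        circuit.push_back([q](std::vector<Amp>& s) {
            const double r = 1.0 / std::sqrt(2.0);
            const size_t bit = size_t(1) << q;
            for (size_t i = 0; i < s.size(); ++i) {
                if (!(i & bit)) {
                    Amp a = s[i], b = s[i | bit];
                    s[i] = r * (a + b);
                    s[i | bit] = r * (a - b);
                }
            }
        });
    }

    // An output request is what forces the JIT flush.
    double Prob(size_t q) {
        flush();
        const size_t bit = size_t(1) << q;
        double p = 0;
        for (size_t i = 0; i < state.size(); ++i)
            if (i & bit) p += std::norm(state[i]);
        return p;
    }
};
```

Between output requests, the buffered circuit remains open to simplification or (eventually) conversion to a fixed-size tensor-network representation, which is exactly why deferring work into the circuit buffer pays off.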