alpaka-group / llama

A Low-Level Abstraction of Memory Access
https://llama-doc.rtfd.io/
Mozilla Public License 2.0

Integer packing and arbitrary precision integers #184

Closed: bernhardmgruber closed this issue 2 years ago

bernhardmgruber commented 3 years ago

LLAMA could allow packing integers into fewer bits than their natural size. Memory-footprint-sensitive data layouts in particular frequently use such types to save memory. Another approach is support for arbitrary-precision integer types, which are also common in FPGA code, e.g. ap_int<N>: https://www.xilinx.com/html_docs/xilinx2020_2/vitis_doc/use_arbitrary_precision_data_type.html

E.g. three 12-bit integers forming an RGB value:

// R, G and B are tag types; llama::Int<12> is the proposed 12-bit integer type
struct R{}; struct G{}; struct B{};

using RecordDim = llama::Record<
    llama::Field<R, llama::Int<12>>,
    llama::Field<G, llama::Int<12>>,
    llama::Field<B, llama::Int<12>>>;

An open design point is how a reference to such an object is formed, since a mapping may not place these objects on a byte boundary, so their locations might not be byte-addressable. A solution could be a proxy reference, as used e.g. by std::bitset<N>.
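A minimal sketch of what such a proxy reference could look like (not LLAMA's actual API; the BitRef12 name, the fixed 12-bit width and the little-endian byte handling are assumptions for illustration):

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <iostream>

// Proxy reference to a 12-bit unsigned value at an arbitrary bit offset in a byte buffer
struct BitRef12 {
    std::byte* storage;    // underlying buffer
    std::size_t bitOffset; // position of the value's first bit

    // read: load 3 bytes (enough for 12 bits plus up to 7 bits of offset),
    // assuming little-endian byte order, then shift/mask the value out
    operator std::uint16_t() const {
        std::uint32_t word = 0;
        std::memcpy(&word, storage + bitOffset / 8, 3);
        return static_cast<std::uint16_t>((word >> (bitOffset % 8)) & 0xFFFu);
    }

    // write: read-modify-write the affected bytes
    BitRef12& operator=(std::uint16_t value) {
        std::uint32_t word = 0;
        std::memcpy(&word, storage + bitOffset / 8, 3);
        const std::uint32_t mask = 0xFFFu << (bitOffset % 8);
        word = (word & ~mask) | ((std::uint32_t{value} & 0xFFFu) << (bitOffset % 8));
        std::memcpy(storage + bitOffset / 8, &word, 3);
        return *this;
    }
};

int main() {
    std::byte buffer[16]{};
    BitRef12 g{buffer, 12}; // e.g. the G channel, starting at bit 12
    g = 0xABC;
    std::cout << std::hex << static_cast<std::uint16_t>(g) << '\n'; // prints abc
}

Reads gather the bytes covering the value and shift/mask it out; writes do a read-modify-write, so concurrent writes to neighboring fields sharing a byte would need extra care.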

bernhardmgruber commented 3 years ago

Related, here is a compiler solution for quantized simulations: https://www.youtube.com/watch?v=0jdrAQOxJlY

bernhardmgruber commented 3 years ago

By adopting N2709, C23 will likely get such an integer type built-in: _BitInt(N), where N is the number of bits.
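For context, a minimal sketch of what such a type looks like (assuming a compiler that already accepts _BitInt in C++ mode as an extension, e.g. recent Clang; sizes and padding are implementation-defined):

#include <cstdio>

int main() {
    _BitInt(12) r = 2000;          // signed 12-bit integer, range -2048..2047
    unsigned _BitInt(12) g = 4095; // unsigned 12-bit integer, range 0..4095
    // Arithmetic is performed at the declared width, but single objects are
    // typically padded to whole bytes, so _BitInt alone does not give a
    // packed storage layout.
    std::printf("%d %u %zu\n", (int)r, (unsigned)g, sizeof(r)); // e.g. "2000 4095 2"
}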

bernhardmgruber commented 2 years ago

With #420 we will get something similar that is actually better. The proposed bitpacking mappings leave the record dimension as-is and only change the storage representation, which is all LLAMA should actually touch. The types in the record dimension are what computations are performed on, and those should stick with the fundamental types of the language.
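To illustrate that separation, a minimal sketch (the record dimension below uses the existing llama::Record/llama::Field API; the comment on the bit count describes the bitpacking mappings proposed in #420 only conceptually, not their exact signature):

#include <llama/llama.hpp>
#include <cstdint>

struct R{}; struct G{}; struct B{};

// The record dimension keeps fundamental types; computations run on std::uint16_t
using RGB = llama::Record<
    llama::Field<R, std::uint16_t>,
    llama::Field<G, std::uint16_t>,
    llama::Field<B, std::uint16_t>>;

// A bitpacking mapping from #420 would take the number of storage bits
// (e.g. 12) as a mapping parameter, changing only how the values are laid
// out in memory, not the types the user computes with.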