NVIDIA / cutlass

CUDA Templates for Linear Algebra Subroutines

[QST] Why does fp8 convert only have a float2fp8 function without PTX? #1564

Open WtDMaO opened 1 month ago

WtDMaO commented 1 month ago

What is your question? Why does the CUDA Toolkit only provide a double2fp8 implementation for the conversion to FP8, while CUTLASS only provides float2fp8? For FP16 and FP32 inputs, the CUDA Toolkit converts to FP8 by first widening the bit width step by step up to double. Is this completely equivalent to a direct conversion? And why is there no fp162fp8 implementation for the non-PTX path?
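One way to see why widening before the FP8 rounding step can be harmless: every finite FP16 value is exactly representable in FP32 and in FP64, so the widening itself never rounds, and only a single rounding occurs at the final narrow-to-FP8 step. The sketch below checks this exhaustively using Python's standard `struct` module (`'e'` is the IEEE binary16 format); it is an illustration of the numerical argument, not code from CUTLASS or the CUDA Toolkit.

```python
import struct

def half_to_double(bits16: int) -> float:
    """Reinterpret a 16-bit pattern as IEEE binary16 and widen it to binary64."""
    return struct.unpack('<e', struct.pack('<H', bits16))[0]

def double_to_half_bits(x: float) -> int:
    """Round a binary64 value to binary16 and return its bit pattern."""
    return struct.unpack('<H', struct.pack('<e', x))[0]

# Every finite binary16 value is exactly representable in binary64, so the
# widening step never rounds: the round-trip is bit-exact for all of them.
for bits in range(0x0000, 0x7C00):  # all non-negative finite fp16 patterns
    assert double_to_half_bits(half_to_double(bits)) == bits
print("fp16 -> double widening is exact for all finite values")
```

Because the widening is exact, fp16 -> double -> fp8 performs exactly one rounding, the same as a hypothetical direct fp16 -> fp8 conversion would.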

WtDMaO commented 1 month ago

I speculate that because there is no performance requirement on the non-PTX path, and because the widening of the bit width is exact (so only one rounding happens, at the final FP8 step), providing conversions only from double or other wider-bit-width types is sufficient.
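For concreteness, that single final rounding step can be sketched as a software encoder for the OCP FP8 E4M3 format (1 sign, 4 exponent bits with bias 7, 3 mantissa bits, max finite 448, no infinities). This is an illustrative round-to-nearest-even sketch only; CUTLASS's `NumericConverter` and the `__nv_cvt_*` intrinsics in `cuda_fp8.h` are the authoritative implementations, and they differ in details such as saturation behavior.

```python
import math

def float_to_e4m3(x: float) -> int:
    """Encode a Python float (binary64) as OCP FP8 E4M3 with
    round-to-nearest-even.  Sketch only, not the CUTLASS implementation."""
    sign = 0x80 if math.copysign(1.0, x) < 0.0 else 0x00
    if math.isnan(x):
        return sign | 0x7F           # E4M3 has no infinities; 0x7F is NaN
    x = abs(x)
    if x < 2.0 ** -6:                # subnormal range: quantum is 2^-9
        n = round(x * 512.0)         # Python round() is round-half-even
        return sign | n              # n == 8 lands exactly on the smallest
                                     # normal encoding (0x08), so this is safe
    f, e = math.frexp(x)             # x = f * 2^e with f in [0.5, 1)
    exp = e - 1                      # x = (2f) * 2^exp with 2f in [1, 2)
    m = round(f * 16.0)              # 2f * 8: round to 3 mantissa bits
    if m == 16:                      # rounding carried into the exponent
        m, exp = 8, exp + 1
    if exp > 8 or (exp == 8 and m - 8 == 7):
        return sign | 0x7E           # saturate to 448; a non-saturating
                                     # converter would produce NaN here
    return sign | ((exp + 7) << 3) | (m - 8)

print(hex(float_to_e4m3(1.0)))      # 0x38: exponent field 7, mantissa 0
print(hex(float_to_e4m3(448.0)))    # 0x7e: the largest finite E4M3 value
```

Since the only rounding in the whole pipeline happens inside this final step, it does not matter whether its input arrived as float or was widened further to double first.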

github-actions[bot] commented 2 weeks ago

This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.