I wrote a simple vec_add HIP application and tried to disassemble the HIP code to ISA code:
// HIP kernel. Each thread takes care of one element of c
__global__ void vecAdd(double *a, double *b, double *c, int n)
{
    // Get our global thread ID
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    // Make sure we do not go out of bounds
    if (id < n)
        c[id] = a[id] + b[id];
}
int main(...){..}
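The elided main() is just the standard HIP vector-add host pattern; a minimal sketch of it, paired with the kernel above (sizes and the absence of error checking are illustrative, not my exact code):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

int main() {
    int n = 1 << 20;                       // number of elements (example size)
    size_t bytes = n * sizeof(double);

    // Host buffers
    double *h_a = (double *)malloc(bytes);
    double *h_b = (double *)malloc(bytes);
    double *h_c = (double *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = i; h_b[i] = 2.0 * i; }

    // Device buffers
    double *d_a, *d_b, *d_c;
    hipMalloc(&d_a, bytes);
    hipMalloc(&d_b, bytes);
    hipMalloc(&d_c, bytes);
    hipMemcpy(d_a, h_a, bytes, hipMemcpyHostToDevice);
    hipMemcpy(d_b, h_b, bytes, hipMemcpyHostToDevice);

    // One thread per element
    int blockSize = 256;
    int gridSize = (n + blockSize - 1) / blockSize;
    hipLaunchKernelGGL(vecAdd, dim3(gridSize), dim3(blockSize), 0, 0,
                       d_a, d_b, d_c, n);

    hipMemcpy(h_c, d_c, bytes, hipMemcpyDeviceToHost);
    printf("c[42] = %f\n", h_c[42]);

    hipFree(d_a); hipFree(d_b); hipFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```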
run with:
sudo rocprofv2 -i ./input.txt --plugin att auto --mode csv ./vaddhip
its output:
Could not find att output kernel: ./kernel.txt
The documentation says:
On ROCm 6.0, ATT enables automatic capture of the ISA during kernel execution, and does not require recompiling. It is recommended to leave it at "auto".
rocprofv2 -i input.txt --plugin att auto --mode csv
Did I miss any necessary steps? I read the att.py code, and it does not seem to do anything to automatically capture the kernel code.
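For reference, the input.txt I pass with -i contains an att line, following the ATT section of the rocprofv2 documentation; the field values below are examples from my reading of the docs, not a known-good configuration:

```
att: TARGET_CU=1
SE_MASK=0x1
SIMD_SELECT=0x3
```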
DEVICE: name: gfx1030
Marketing Name: AMD Radeon RX 6900 XT
Operating System
Ubuntu
CPU
AMD® Ryzen 7 5700G with Radeon Graphics × 16
GPU
AMD Radeon VII
ROCm Version
ROCm 6.1.0
ROCm Component
No response
Steps to Reproduce
No response
(Optional for Linux users) Output of /opt/rocm/bin/rocminfo --support
No response
Additional Information
No response