Closed domluna closed 8 months ago
```c
struct ggml_tensor {
    enum ggml_type         type;
    enum ggml_backend_type backend;

    struct ggml_backend_buffer * buffer;

    int     n_dims;
    int64_t ne[GGML_MAX_DIMS]; // number of elements
    size_t  nb[GGML_MAX_DIMS]; // stride in bytes:
                               // nb[0] = ggml_type_size(type)
                               // nb[1] = nb[0]   * (ne[0] / ggml_blck_size(type)) + padding
                               // nb[i] = nb[i-1] * ne[i-1]

    // compute data
    enum ggml_op op;

    // op params - allocated as int32_t for alignment
    int32_t op_params[GGML_MAX_OP_PARAMS / sizeof(int32_t)];

    bool is_param;

    struct ggml_tensor * grad;
    struct ggml_tensor * src[GGML_MAX_SRC];

    // performance
    int     perf_runs;
    int64_t perf_cycles;
    int64_t perf_time_us;

    struct ggml_tensor * view_src;
    size_t               view_offs;

    void * data;

    char name[GGML_MAX_NAME];

    void * extra; // extra things e.g. for ggml-cuda.cu

    char padding[12];
};
```
https://github.com/JuliaInterop/Clang.jl/blob/master/src/generator/codegen.jl#L546
When `int64_t ne[GGML_MAX_DIMS];` or `size_t nb[GGML_MAX_DIMS];` is encountered, the assert throws, saying it expects that type to be a pointer.

Codegen works if either the self-references to `ggml_tensor` are removed, or the fixed-size array fields are removed; it only fails when both are present in the same struct.
Fixed in #453.