sophgo / tpu-mlir

Machine learning compiler based on MLIR for Sophgo TPU.

Trying to add an operator myself #155

Open chenj133 opened 11 months ago

chenj133 commented 11 months ago

I'm trying to add a Mod operator myself. In the dialect code, I see the core of the Div operator looks like this:

```cpp
LogicalResult top::DivOp::init(InferenceParameter &p) {
  auto binary = new Binary();
  int index0 = 0, index1 = 1;
  if (getIsReverse()) {
    index0 = 1, index1 = 0;
  }
  auto lhs_shape = module::getShape(getInputs()[index0]);
  auto rhs_shape = module::getShape(getInputs()[index1]);

  (*binary)
      .hs(p.inputs[index0], p.inputs[index1], lhs_shape, rhs_shape)
      .dst(p.outputs[0], module::getShape(getOutput()))
      .do_relu(getDoRelu())
      .relu_limit(getReluLimit().convertToDouble())
      .algorithem(algorithm::binary_div)
      .setup();

  p.handle = (void *)binary;

  return success();
}
```

I can't find the implementation of `algorithm::binary_div`. It doesn't look like a function from the C++ standard `<algorithm>` header, yet the file does include that standard header directly. How should I go about replacing this division with a `std::fmod` call?
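
For context, `algorithm::binary_div` most likely refers to the oneDNN (dnnl) algorithm enum that the `Binary` helper under `Support/Dnnl` wraps, rather than anything from the standard `<algorithm>` header. Since that enum does not seem to offer an fmod-style operation, one possible approach (a minimal sketch, assuming both inputs are already broadcast to the output shape, and skipping the `Binary` helper entirely) is to compute the result directly in `inference()`:

```cpp
#include <cmath> // for std::fmod

// Sketch only: elementwise fmod without the Dnnl Binary helper.
// Assumption: both inputs already have the same shape as the output.
LogicalResult top::ModOp::inference(InferenceParameter &p) {
  const float *lhs = p.inputs[0];
  const float *rhs = p.inputs[1];
  float *out = p.outputs[0];
  const int64_t num_elem = module::getNumElements(getOutput());
  for (int64_t i = 0; i < num_elem; ++i) {
    out[i] = std::fmod(lhs[i], rhs[i]);
  }
  return success();
}
```

Broadcasting between differently shaped inputs would still have to be handled separately, for example with a stride-based loop like the one in the next comment.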

chenj133 commented 11 months ago

I copied the Copy operator and edited the Mod.cpp file under the dialect as follows:

```cpp
//===----------------------------------------------------------------------===//
//
// Copyright (C) 2022 Sophgo Technologies Inc. All rights reserved.
//
// TPU-MLIR is licensed under the 2-Clause BSD License except for the
// third-party components.
//
//===----------------------------------------------------------------------===//

#include "tpu_mlir/Support/Module.h"

#include <cmath> // for std::fmod

int64_t top::ModOp::getFLOPs() { return module::getNumElements(getOutput()); }

LogicalResult top::ModOp::init(InferenceParameter &p) { return success(); }

void top::ModOp::deinit(InferenceParameter &p) {}

LogicalResult top::ModOp::inference(InferenceParameter &p) {
  float *input_data0 = p.inputs[0];
  float *input_data1 = p.inputs[1];
  float *output_data = p.outputs[0];

  auto shape = module::getI64Array(this->getShape());
  auto i_stride = module::getI64Array(this->getInputStride());
  auto o_stride = module::getI64Array(this->getOutputStride());
  std::vector<int64_t> shape_4 = {1, 1, 1, 1};
  std::vector<int64_t> i_stride_4 = {0, 0, 0, 0};
  std::vector<int64_t> o_stride_4 = {0, 0, 0, 0};
  int num_dims = shape->size();
  assert(num_dims <= 4);
  assert(i_stride->size() == shape->size());
  assert(o_stride->size() == shape->size());
  // Right-align the (up to) 4-D shape and strides.
  for (int end = num_dims - 1, idx = 3; end >= 0 && idx >= 0; end--, idx--) {
    shape_4[idx] = shape->at(end);
    i_stride_4[idx] = i_stride->at(end);
    o_stride_4[idx] = o_stride->at(end);
  }

  for (int n = 0; n < shape_4[0]; n++) {
    for (int c = 0; c < shape_4[1]; c++) {
      for (int h = 0; h < shape_4[2]; h++) {
        for (int w = 0; w < shape_4[3]; w++) {
          int in_index = n * i_stride_4[0] + c * i_stride_4[1] +
                         h * i_stride_4[2] + w * i_stride_4[3];
          int out_index = n * o_stride_4[0] + c * o_stride_4[1] +
                          h * o_stride_4[2] + w * o_stride_4[3];
          output_data[out_index] =
              std::fmod(input_data0[in_index], input_data1[in_index]);
        }
      }
    }
  }
  return success();
}

void top::ModOp::shape_inference() {
  broadcast_shape_inference(getOperation());
  for (int i = 0; i < getNumOperands(); i++) {
    auto value = getInputs()[i];
    broadcast_tensor_reshape(getOutput(), value);
  }
}
```

But compilation fails with:

```
/workspace/tpu-mlir/lib/Dialect/Top/Interfaces/Mod.cpp:22:42: error: no member named 'getShape' in 'tpu_mlir::top::ModOp'
  auto shape = module::getI64Array(this->getShape());
```

At the top of the file I added `#include "tpu_mlir/Support/Module.h"`, just like Copy.cpp. Why does this fail?
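
Note that `getShape()`, `getInputStride()` and `getOutputStride()` in the snippet above are accessors that MLIR generates from attributes which CopyOp presumably declares in its ODS definition; if ModOp's definition declares no such attributes, those members simply do not exist on the op, regardless of which headers are included. A hedged sketch of one way to avoid them, deriving the shape and contiguous row-major strides from the output value instead (names and layout assumptions here are only illustrative):

```cpp
// Sketch: replace the attribute-based shape/stride setup at the top of
// inference() with values derived from the output tensor itself.
// Assumption: inputs are already broadcast to the output shape and stored
// contiguously in row-major order.
auto out_shape = module::getShape(getOutput()); // llvm::ArrayRef<int64_t>
int num_dims = static_cast<int>(out_shape.size());
assert(num_dims <= 4);
std::vector<int64_t> shape_4 = {1, 1, 1, 1};
for (int end = num_dims - 1, idx = 3; end >= 0; end--, idx--) {
  shape_4[idx] = out_shape[end];
}
// Contiguous strides over the right-aligned 4-D shape; reuse them for both
// the input and the output indexing in the element loop.
std::vector<int64_t> stride_4 = {shape_4[1] * shape_4[2] * shape_4[3],
                                 shape_4[2] * shape_4[3], shape_4[3], 1};
```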

lordrebel commented 10 months ago

Maybe you forgot to add the op definition in the ODS file: include/tpu_mlir/Dialect/Tpu/IR/TpuOps.td?