tracel-ai / burn

Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals.
https://burn.dev
Apache License 2.0

Support missing ONNX ops to import `optimum/all-MiniLM-L6-v2` #600

Open antimora opened 1 year ago

antimora commented 1 year ago

llogiq on Reddit requested support for missing ONNX ops. We are filing this issue to prioritize them.

The model: https://huggingface.co/optimum/all-MiniLM-L6-v2/blob/main/model.onnx

The ops used in this model (checked if supported):

All of these ops are already implemented in Burn; they now need to be supported by burn-import.

ShivangRawat30 commented 1 year ago

Hey, I am currently learning Rust. Since this is a good first issue, can you please assign it to me?

antimora commented 1 year ago

@ShivangRawat30, I would recommend starting with a unary operator such as Sqrt. Work on it and submit a PR. If you agree, just comment "claim Sqrt".

If you have questions and want a faster response, you can join Discord (you can find the link in the README).

If anyone else wants to work on any of the ops, just comment "claim (some operator)".
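For anyone new to the task: a unary ONNX op such as Sqrt simply applies its function elementwise, and burn-import's job is to generate code that calls the corresponding Burn tensor method (e.g. `tensor.sqrt()`). A minimal stdlib-only sketch of the semantics such an op must reproduce (the function name here is hypothetical, not burn-import API):

```rust
// Elementwise semantics of the ONNX Sqrt op: out[i] = sqrt(in[i]).
// Plain-slice sketch; burn-import maps this to Burn's `tensor.sqrt()`.
fn sqrt_elementwise(data: &[f64]) -> Vec<f64> {
    data.iter().map(|x| x.sqrt()).collect()
}

fn main() {
    let input = [1.0, 4.0, 9.0];
    let output = sqrt_elementwise(&input);
    assert_eq!(output, vec![1.0, 2.0, 3.0]);
    println!("{:?}", output);
}
```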

ShivangRawat30 commented 1 year ago

claim Sqrt

AuruTus commented 1 year ago

👀 Hi, I'm interested in this too. But I'm a bit confused: should the new import file be put under burn-import/src/burn/node/, and is the Burn implementation in the burn-tensor crate what we should refer to? CMIIW 😃

antimora commented 1 year ago

@AuruTus, since all these ops are already available in Burn, they only have to be implemented in burn-import. I would recommend starting with something simple. Maybe Tanh?

AuruTus commented 1 year ago

Thank you for the advice! Will look at it later.

claim Tanh

AuruTus commented 1 year ago

I have free time now and can finish more. Claim erf.
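Since `erf` has no closed form and is not in Rust's standard library, a common reference point is the Abramowitz-Stegun approximation (formula 7.1.26, maximum absolute error around 1.5e-7). This stdlib-only sketch can help check the values an Erf op should produce; it is not burn-import code:

```rust
// erf via the Abramowitz-Stegun 7.1.26 polynomial approximation.
// Useful as a reference when verifying an Erf op's outputs.
fn erf(x: f64) -> f64 {
    let sign = if x < 0.0 { -1.0 } else { 1.0 };
    let x = x.abs();
    let t = 1.0 / (1.0 + 0.3275911 * x);
    // Horner evaluation of the degree-5 polynomial in t.
    let poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
        - 0.284496736)
        * t
        + 0.254829592)
        * t;
    sign * (1.0 - poly * (-x * x).exp())
}

fn main() {
    assert!(erf(0.0).abs() < 1e-6);
    assert!((erf(1.0) - 0.8427).abs() < 1e-3);
    println!("erf(1.0) = {}", erf(1.0));
}
```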

antimora commented 1 year ago

@AuruTus, we added ONNX file-based testing to verify end-to-end conversion. Let me know if you have questions.

jmintb commented 11 months ago

Claim Pow
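For reference, ONNX Pow is an elementwise power of two tensors (with broadcasting, which this sketch omits). A minimal stdlib illustration of the semantics, not burn-import code:

```rust
// Elementwise semantics of the ONNX Pow op: out[i] = base[i] ^ exp[i].
// Broadcasting is omitted for simplicity; inputs are assumed equal length.
fn pow_elementwise(base: &[f64], exp: &[f64]) -> Vec<f64> {
    base.iter().zip(exp).map(|(b, e)| b.powf(*e)).collect()
}

fn main() {
    let out = pow_elementwise(&[2.0, 3.0], &[3.0, 2.0]);
    assert_eq!(out, vec![8.0, 9.0]);
    println!("{:?}", out);
}
```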

CohenAriel commented 11 months ago

Claim Gather

CohenAriel commented 10 months ago

@antimora I'm having some problems with the codegen test. Since the index argument to torch's gather is an int tensor, I get this error:

```
thread 'burn::node::binary::tests::test_binary_codegen_gather' panicked at burn-import/src/burn/node/base.rs:240:9:
assertion failed: `(left == right)`

Diff < left / right > :
<use burn::tensor::Int;
 use burn::{
     module::Module,
     tensor::{backend::Backend, Tensor},
 };
 #[derive(Module, Debug)]
 pub struct Model<B: Backend> {
     phantom: core::marker::PhantomData<B>,
 }
 impl<B: Backend> Model<B> {
     #[allow(unused_variables)]
     pub fn new_with(record: ModelRecord<B>) -> Self {
         Self {
             phantom: core::marker::PhantomData,
         }
     }
     #[allow(clippy::let_and_return)]
     pub fn forward(&self, tensor1: Tensor<B, 2>, tensor2: Tensor<B, 2, Int>) -> Tensor<B, 2> {
         let tensor3 = tensor1.gather(1, tensor2);
         tensor3
     }
 }
```

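For context on what the generated `tensor1.gather(1, tensor2)` computes: Burn's `gather` follows torch-style semantics, so for dim = 1, `out[i][j] = input[i][index[i][j]]`, with an integer index tensor (which is why the generated code imports `Int`). A stdlib sketch using Vec-of-Vec stand-ins for tensors, illustrative only:

```rust
// Torch-style gather along dim 1: out[i][j] = input[i][index[i][j]].
// Vec-of-Vec stand-ins for 2D tensors; no bounds beyond normal indexing.
fn gather_dim1(input: &[Vec<f64>], index: &[Vec<usize>]) -> Vec<Vec<f64>> {
    index
        .iter()
        .enumerate()
        .map(|(i, row)| row.iter().map(|&j| input[i][j]).collect())
        .collect()
}

fn main() {
    let input = vec![vec![10.0, 20.0], vec![30.0, 40.0]];
    let index = vec![vec![1, 0], vec![0, 0]];
    let out = gather_dim1(&input, &index);
    assert_eq!(out, vec![vec![20.0, 10.0], vec![30.0, 30.0]]);
    println!("{:?}", out);
}
```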
antimora commented 10 months ago

@CohenAriel I am not sure what it is being compared to. Is it the expected code or the generated code?

CohenAriel commented 10 months ago

This is the expected code; the top line is from the generated code. I just realized the diff colors don't show here.

The generated code imports Int but the expected code doesn't.

antimora commented 10 months ago

In that case you will need to modify your expected test. You may have to write the expected code by hand instead of using the macros in place. You should find some examples with Int imports. I can't point you to them because I am currently on my phone and it is hard to browse code.

edmondop commented 10 months ago

I'll grab Slice.