Closed tgolsson closed 2 years ago
Hey, thanks for the report. As you may have noticed, symbols in NNEF/tract-opl are brand new. We ourselves have about half a model working with it at this point, so it's not a big surprise you run into issues.
The NNEF/tract-opl dump is only meant to work after decluttering: on a TypedModel, before calling "codegen", and independently of any TypedPlan.
I can actually reproduce your issue with:
```rust
use tract_nnef::internal::*;

fn main() -> TractResult<()> {
    let n = Symbol::new('n');
    let mut model = TypedModel::default();
    let source = model.add_source("foo", TypedFact::dt_shape(DatumType::F32, &[n]))?;
    model.set_output_outlets(&[source])?;
    model.declutter()?;
    tract_nnef::nnef().write_to_dir(&model, "foo")?;
    Ok(())
}
```
I'll be looking into a fix.
Thank you @kali! Yeah, I meant TypedModel, not TypedPlan.
I think I have fixed it, and the fix is merged on main. It would be nice if you could confirm.
@kali It now gets past the export stage; thank you! Will continue working on loading/inferring and see if more comes up there. Thanks for the quick fix.
Hello!
I'm trying to convert some of our stack to ship NNEF instead of raw ONNX (for load-time performance reasons, based on #393), but I'm running into a slight issue with batching. My expectation was that when running

.to_typed()

I have to set input facts, and if I want dynamic batching, that is my point to set a Symbol as the dim. However, this causes the NNEF serialization to crash on an unwrap (see the callstack below).

Is the intention that I fix my batch size (e.g. to 1) before the NNEF conversion and then swap the symbol back in afterwards? Am I doing too much work on the model before the conversion? It seems to me like I have to go all the way to a TypedModel before I can do the conversion.
These are my input shapes:
Callstack: