hotg-ai / rune

Rune provides containers to encapsulate and deploy edge ML pipelines and applications.
Apache License 2.0

Emit a log statement before running each stage in the ML pipeline #335

Closed Michael-F-Bryan closed 3 years ago

Michael-F-Bryan commented 3 years ago

To help people troubleshoot slow Runes, we need a way to see how much time is spent on each stage of the Rune's pipeline.

My initial thought is to modify the code generated by rune build so that it uses the built-in logging system.

It could look like this:

  let pipeline = move || {
      let _guard = hotg_runicos_base_wasm::PipelineGuard::default();
+     log::debug!("Reading data from \"accelerometer\"");
      let accelerometer_0: Tensor<f32> = accelerometer.generate();
+     log::debug!("Running the \"normalize\" proc block");
      let normalize_0: Tensor<f32> = normalize.transform(accelerometer_0.clone());
+     log::debug!("Doing inference with the \"gesture\" model");
      let gesture_0: Tensor<f32> = gesture.transform(normalize_0.clone());
+     log::debug!("Running the \"most_confident_index\" proc block");
      let most_confident_index_0: Tensor<u32> = most_confident_index.transform(gesture_0.clone());
+     log::debug!("Running the \"label\" proc block");
      let label_0: Tensor<&str> = label.transform(most_confident_index_0.clone());
+     log::debug!("Sending results to the \"serial\" output");
      serial.consume(label_0.clone());
  };

Then we'll modify the PipelineGuard to emit a log statement when it is constructed and destroyed, letting us know when the pipeline starts and finishes.
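The start/finish logging could be done with a constructor and a Drop impl. Here is a minimal, self-contained sketch of that idea (hypothetical; the real PipelineGuard lives in hotg_runicos_base_wasm and would use the log crate's macros rather than eprintln!):

```rust
use std::time::Instant;

/// Sketch of the proposed PipelineGuard: it logs when the pipeline starts
/// (on construction) and when it finishes (on drop), including the total
/// elapsed time.
struct PipelineGuard {
    started: Instant,
}

impl Default for PipelineGuard {
    fn default() -> Self {
        // In the generated Rune this would be log::debug!(...)
        eprintln!("Starting the pipeline");
        PipelineGuard { started: Instant::now() }
    }
}

impl Drop for PipelineGuard {
    fn drop(&mut self) {
        eprintln!("Pipeline finished in {:?}", self.started.elapsed());
    }
}

fn main() {
    let _guard = PipelineGuard::default();
    // ... the generated pipeline stages would run here ...
    eprintln!("Running the \"normalize\" proc block");
} // _guard is dropped here, logging the total elapsed time
```

Because the guard is bound to `_guard` at the top of the pipeline closure, it is dropped when the closure returns, so the "finished" message brackets every stage automatically.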

Michael-F-Bryan commented 3 years ago

@Mohit0928, would you be interested in taking this on? It'd be a good issue for familiarising yourself with how Runes are built.