nberktumer / ha-bambu-lab-p1-spaghetti-detection

Bambu Lab Home Assistant Spaghetti Detection Integration.
GNU General Public License v3.0

Updated blueprint and Readme to account for new Bambu Labs Camera breaking change #26

Closed by jmdevita 1 week ago

jmdevita commented 2 weeks ago

I was able to modify the blueprint to account for the breaking change in 2.0.23+. Because the image entity is now a camera entity, its refresh cadence slowed to roughly 5-minute intervals, which is too slow to send updates to the Obico server. To work around this, I added a new blueprint variable, "Printer Frame Refresh", that controls when the printer sends an image to the Obico server. During testing I have it working with the "Current layer" sensor, which effectively sends a photo to the server every time a new layer starts.

[Screenshot taken 2024-10-28 at 3:46 PM]

I couldn't find a sensor or state change in the Bambu Lab integration that updates more often than that. Users on the old integration (<= 2.0.22) can simply point this "Printer Frame Refresh" blueprint variable at the image entity. I couldn't downgrade to test this, but logically it should work fine. This is explained in the Readme.
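For reference, the wiring is roughly this (a simplified sketch, not the exact blueprint; the input key, description, and trigger shape here are illustrative):

```yaml
blueprint:
  name: Bambu Lab P1 Spaghetti Detection (sketch)
  domain: automation
  input:
    printer_frame_refresh:
      name: Printer Frame Refresh
      description: >-
        Entity whose state changes decide when a frame is sent to the Obico
        server, e.g. the "Current layer" sensor (or the image entity on <= 2.0.22).
      selector:
        entity: {}

trigger:
  # Fires on every state change of the chosen entity, so with the
  # "Current layer" sensor a frame is uploaded once per printed layer.
  - platform: state
    entity_id: !input printer_frame_refresh
```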

Finally, I also modified the reload_config_entry step so that it only applies to the older integration (<= 2.0.22). With the frame now refreshing at the layer level, the gap between frames can easily exceed 30 seconds, which would otherwise trigger a false-positive reload.
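Conceptually the gate looks something like this (illustrative only; "is_legacy_integration" is a hypothetical flag, and the actual blueprint's condition and variable names may differ):

```yaml
# Sketch only; not the blueprint's real structure.
- if:
    # Only on <= 2.0.22, where the frame entity is an image that normally
    # refreshes every few seconds, so a >30 s gap really means it is stuck.
    - condition: template
      value_template: "{{ is_legacy_integration }}"
  then:
    - service: homeassistant.reload_config_entry
      target:
        entity_id: !input printer_frame_refresh
```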

matthawley commented 2 weeks ago

@jmdevita I appreciate this! I'm giving it a shot right now - though one thought: with the "current frame" now only updating once per layer, I suspect we may not be analyzing the response until after 30 layers?

```yaml
if:
  - condition: numeric_state
    entity_id: number.bambu_lab_p1_spaghetti_detection_current_frame_number
    below: 30
then:
  - stop: ''
alias: if current_frame_num < 30
```
jmdevita commented 2 weeks ago

@matthawley Yup, you're totally right. That's the con here that I couldn't find a way around without reducing the threshold. Currently it waits until it has a sample size of 30 frames before it starts aggregating into the other sensors. I couldn't find another sensor (short of a timed loop) that updates as fast as the image entity did in 2.0.22.

Having said that, my rationale is that if spaghetti shows up around the ~30-layer mark, it will be caught right after the 30th layer without too much wasted filament, and most prints run to hundreds or thousands of layers anyway.

desertblade commented 2 weeks ago

I am trying it out using the Bambu "Remaining time" sensor as the Printer Frame Refresh. It looks like it refreshes every 60 seconds or so.

My current print covers most of the plate, but it's only 29 layers (a 3-hour print), so the layer count wouldn't work.

All the calculated fields seem to be updating now. The blueprint is a good workaround.
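If anyone wants to replicate this, the automation's blueprint input ends up looking something like the following (the path and entity ID are just examples for a typical setup, not the real names):

```yaml
# Example automations.yaml entry; adjust the blueprint path and entity ID
# to your own installation.
- alias: Spaghetti detection via remaining time
  use_blueprint:
    path: spaghetti_detection.yaml            # placeholder path
    input:
      printer_frame_refresh: sensor.p1s_remaining_time   # example entity ID
```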

jmdevita commented 2 weeks ago

@desertblade that makes sense, good callout. At least it's adjustable for now so you aren't forced to use the current layer sensor. I'm going to keep looking when I can to see if there is a better sensor/method. Good idea with the remaining time sensor!

matthawley commented 2 weeks ago

@jmdevita yeah, understood - though what I also gather is that it may only run a check every 30 frames, since the current count resets at that point?

matthawley commented 2 weeks ago

@jmdevita thoughts on using remaining time? If it's time based, you could reincorporate the 30-second delay that was conditioned out for the non-image entity type?
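Something along these lines, maybe (purely a sketch; "frame_refresh_entity" is a made-up variable holding the entity chosen for Printer Frame Refresh):

```yaml
# Sketch only; variable name and template are hypothetical.
- if:
    # For time-based refresh entities (not the old image entity), pace the
    # loop ourselves instead of relying on fast entity updates.
    - condition: template
      value_template: "{{ not frame_refresh_entity.startswith('image.') }}"
  then:
    - delay: "00:00:30"
```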

matthawley commented 2 weeks ago

I just read the comments above ... great minds @desertblade ;)

nberktumer commented 1 week ago

Thank you for your contribution 🚀

I implemented camera entity support with a timer that runs the prediction every X seconds (configured via the Detection Interval value in the automation). I think this solution will be more reliable.
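A loop of that shape could look roughly like the sketch below (simplified, not the merged blueprint; the entity IDs, the "detection_interval" input name, and the snapshot/upload steps are placeholders):

```yaml
# Simplified sketch of a timer-paced detection loop.
action:
  - repeat:
      while:
        # Keep predicting for as long as the printer is printing.
        - condition: state
          entity_id: sensor.p1s_print_status      # example entity ID
          state: printing
      sequence:
        # Grab the current camera frame...
        - service: camera.snapshot
          target:
            entity_id: camera.p1s_camera          # example entity ID
          data:
            filename: /config/www/spaghetti_frame.jpg
        # ...send it to the Obico server for a prediction (omitted here)...
        # ...then wait for the configured Detection Interval before the next pass.
        - delay:
            seconds: !input detection_interval
```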