fluidd-core / fluidd

Fluidd, the klipper UI.
https://docs.fluidd.xyz
GNU General Public License v3.0

Spoolman integration erroneously reports missing metadata #1408

Closed by matmen 2 months ago

matmen commented 3 months ago

Reported by @claudermilk in https://github.com/Arksine/moonraker/issues/838 (and various other users on Discord):

What happened

Sometimes when starting a print job, after selecting a spool for the Spoolman integration the error message "The amount of filament required to print the selected file is unknown. Do you wish to continue?" is displayed.

Client

Fluidd

Browser

Chrome

How to reproduce

After a print job is completed, start another one. It's intermittent, so no telling when it throws the error.

Additional information

Issuing a firmware_restart clears the issue. I have seen this on two different printer installations; one just got a complete software and firmware re-install. I have seen this since the Spoolman integration was introduced. Both run RPi Debian Lite, one on an RPi 3A+ and one on a 4. Spoolman is a standalone server, currently running 0.16 (was 0.13; same behavior).

The relevant log error line is: [common.py:build_error()] - JSON-RPC Request Error - Requested Method: server.files.metadata, Code: -32601, Message: Metadata not available for

So far I've only seen this happen when a file was moved/renamed/deleted but the reprint tab hadn't updated its contents yet; then again, in that case it would also error when trying to start the print.
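For reference, the failing call is an ordinary JSON-RPC request. A minimal sketch of the round trip, reconstructed from the log line above (the filename and id are illustrative, not taken from the log):

```python
# Hypothetical reconstruction of the failing request/response pair.
request = {
    "jsonrpc": "2.0",
    "method": "server.files.metadata",
    "params": {"filename": "benchy.gcode"},  # illustrative filename
    "id": 42,
}

# Shape of the error reported in the log (code -32601,
# "Metadata not available for ..."):
response = {
    "jsonrpc": "2.0",
    "error": {
        "code": -32601,
        "message": "Metadata not available for <benchy.gcode>",
    },
    "id": 42,
}

def metadata_missing(resp: dict) -> bool:
    """True when the response carries the 'metadata not available' error."""
    err = resp.get("error")
    return bool(err) and "Metadata not available" in err.get("message", "")

print(metadata_missing(response))  # True
```

When Fluidd gets this error back it has no filament-length estimate for the file, which is what surfaces as the "amount of filament required ... is unknown" dialog.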

matmen commented 3 months ago

@claudermilk Please upload your moonraker log file here

claudermilk commented 2 months ago

Conveniently, one of the printers immediately did this. moonraker.log

matmen commented 2 months ago

Thanks. From the logs I can see fluidd requesting the metadata first and then the file being uploaded after (~20s later), but that doesn't make much sense. I assume the file was already on your host and you re-uploaded it after fluidd showed the error message? Either way, the request looks fine to me, assuming the file isn't located in some sub-directory. Can I ask if you tried starting the print job via the Jobs page or via the reprint tab on the dashboard?

claudermilk commented 2 months ago

It will do it whether I start from reprint or the Jobs panel. It seems more frequent with reprint. I have seen this with files in the "root" for gcodes and subdirectories within that. I am not re-uploading the file, I am using one already there.

I've seen this on dozens of files sliced with both SuperSlicer and Orca, often with a file that just printed successfully: the job finishes, I clear the plate, hit print, and kaboom. So I know the file is fine, and it doesn't seem to have any connection with the specific slicer or version.

Arksine commented 2 months ago

Something is re-uploading the file; it's present in the log. I suspect that this behavior is related to the metadata issue.

IIRC, these slicers have a browser integration. Are you using them to open Fluidd? I thought it was strange that Fluidd would upload using the /api/files/local endpoint.

If you are using their integration, can you reproduce this straight from the browser?

In addition, it would be helpful to enable verbose logging as I mentioned in the other issue; however, let's track it here for now:

Add the following to /home/pi/printer_data/systemd/moonraker.env:

MOONRAKER_VERBOSE_LOGGING="y"

Then restart the service:

sudo systemctl restart moonraker

It will create large log files, so you will want to disable it after reproducing the error.

claudermilk commented 2 months ago

Yes, Orca has that integration, but I don't use it. I'm going straight to the web interface in Chrome. I've updated my moonraker.env...and went to try to reproduce the error and the printer obliged and immediately did it for me. moonraker (1).log

The printer sat idle all night. After restarting the moonraker service, I warmed it up with a defined preset in Fluidd and told it to print an existing file from the Jobs panel on the Home screen. I accepted the current active spool in Spoolman and it threw the error. This is a file that I ran yesterday with no error, so it is a known good file.

matmen commented 2 months ago

Thanks, I think I've found the issue. When there's an ongoing print, we subtract the already-printed filament length from the total length (in order to check whether the selected spool has enough filament to finish the print). The logic here is broken: it also subtracts the previous print's used filament when starting a new job, resulting in 0 required filament for the new print, which in turn makes Fluidd trigger the error message. I think we can get a fix out for this in the next Fluidd release.
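The diagnosed bug can be illustrated with a small sketch. All names and numbers here are hypothetical (the real logic lives in Fluidd's Spoolman dialog); it only mirrors the description above:

```python
def required_broken(total_mm: float, printed_mm: float) -> float:
    """Buggy logic as described: subtracts filament used by the
    PREVIOUS print even when a fresh job is being started."""
    return max(total_mm - printed_mm, 0.0)

def required_fixed(total_mm: float, printed_mm: float,
                   same_job_in_progress: bool) -> float:
    """Only subtract already-printed filament while that same job
    is actually still running."""
    if same_job_in_progress:
        return max(total_mm - printed_mm, 0.0)
    return total_mm

# Right after a completed print, printed_mm is roughly equal to the
# file's total, so the buggy version reports ~0 mm required, which
# Fluidd then surfaces as "required filament unknown":
print(required_broken(5000.0, 5000.0))        # 0.0
print(required_fixed(5000.0, 5000.0, False))  # 5000.0
```

This also matches the reproduction reports: the error shows up most readily when reprinting a file immediately after it finished, and a firmware_restart (which resets the print stats) clears it.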

matmen commented 1 month ago

@claudermilk Can you confirm that updating to fluidd v1.30.0 fixed the issue?

claudermilk commented 1 month ago

Yes, it has. I haven't seen the error since the update.