It sounds like the failed-boot-count trigger is tripping and the partition is being marked as unbootable. You can clear this flag in the UEFI menu system. That said, the only way this should happen is if the system doesn't boot all the way and never marks the boot as successful. This mechanism has changed a few times over the years, but I think nv_update_verifier.service is what does the marking. If you don't crash your system, but just restart it a number of times without powering down, does it still get into this state?
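Roughly speaking, the success-marking step boils down to something like the sketch below: wait until userspace has actually come up, then mark the current slot good, which resets its failed-boot counter. This is only an illustration, not the actual verifier implementation, and it assumes `nvbootctrl` is present on your Yocto image and accepts a `mark-boot-successful` verb (verb names differ between L4T releases, so check `nvbootctrl -h` on the target first).

```python
#!/usr/bin/env python3
"""Illustration of the success-marking step in rootfs A/B:
once userspace has come up far enough, the currently booted slot is
marked good, which resets its failed-boot counter.

Assumptions (not verified against this BSP): `nvbootctrl` is on the
image and accepts `mark-boot-successful`; check `nvbootctrl -h` first.
"""
import subprocess
import sys


def main() -> int:
    # Wait for systemd to finish startup; "running" or "degraded" both mean
    # userspace actually came up (a crash earlier in boot never gets here).
    state = subprocess.run(
        ["systemctl", "is-system-running", "--wait"],
        capture_output=True, text=True,
    ).stdout.strip()
    if state not in ("running", "degraded"):
        print(f"system state is '{state}', not marking boot successful",
              file=sys.stderr)
        return 1

    # Resets the failed-boot counter for the currently booted slot.
    subprocess.run(["nvbootctrl", "mark-boot-successful"], check=True)
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

If your crash really lands after this point in boot, the counter should already have been reset for that cycle, so it's worth confirming in the journal when the verifier actually runs relative to the crash (e.g. `journalctl -u nv_update_verifier.service`).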
If I don't crash, it's fine; I can reboot as many times as I want. The crash happens quite late in boot, so I would assume all the marking is already done, but I get your point. I will check. Also, it's possible that BOTH rootfs partitions were marked as unbootable.
Confirmed: both partitions were marked as "unbootable".
I have a custom Orin box running Yocto Kirkstone with A/B enabled. The box has custom cameras (and a custom kernel driver for them). The issue: say there is a bug in the camera driver and it crashes the OS. The box then resets itself (watchdog?), and after enough crash/reset cycles it somehow "corrupts" the OS and gets stuck in the EFI console. This could happen with any driver, so I would like to have a good understanding of why it happens, and a workaround (besides fixing the driver, of course). I already tried mounting the EFI partition as read-only (and the EFIVAR partition as well), but it didn't prevent the issue. Also, if, say, partition A is corrupted, how do I tell that's the case? Is there a command in EFI to tell? Shouldn't it switch to partition B?
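For reference, the closest I have gotten to checking the slot state myself is dumping the UEFI variables that look related from a booted system. This is only a sketch under assumptions: I am guessing the slot/boot-chain state lives in UEFI variables whose names mention "Rootfs" or "BootChain" (the exact names vary by L4T release), and on a stock image I believe `nvbootctrl dump-slots-info` prints this more directly, if the Yocto build ships that tool.

```python
#!/usr/bin/env python3
"""Dump UEFI variables that look related to rootfs A/B slot state.

Assumption: the slot/boot-chain status is stored in UEFI variables whose
names contain "Rootfs" or "BootChain" (exact names vary by L4T release).
Prints raw bytes so a good boot can be compared against a bad one.
"""
import glob

EFIVARS = "/sys/firmware/efi/efivars"

for path in sorted(glob.glob(f"{EFIVARS}/*")):
    name = path.rsplit("/", 1)[-1]  # efivarfs names are VariableName-GUID
    if "Rootfs" not in name and "BootChain" not in name:
        continue
    with open(path, "rb") as f:
        raw = f.read()
    # efivarfs prepends a 4-byte attribute word before the variable data
    data = raw[4:]
    print(f"{name}: {data.hex()}")
```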