oxidecomputer / omicron

Omicron: Oxide control plane

Cold boot should handle scrimlet sled-agent restarts #4592

Closed by rcgoodfellow 8 months ago

rcgoodfellow commented 11 months ago

During an update of rack 2, we encountered the following.

As sled agents began to launch, a bug (introduced by yours truly) prevented the agents from getting out of early bootstrap: a new field added to the early network config caused a deserialization error that kept sled agents from fully starting up. To work around this, we read the persistent early network config file kept by the bootstore in /pool/int, added the missing field, and serialized the file back to /pool/int. We also bumped the generation number of the config so that the bootstore protocol would propagate the new value to all the other sled-agents. We then restarted sled-agent, which read the updated early network config and was now able to parse it.
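For context on the deserialization failure, here is a minimal sketch of how a newly added field breaks parsing of an already-persisted file unless it is defaulted, and of what the manual edit plus generation bump amounted to. This is simplified and the field names are hypothetical; it is not omicron's actual `EarlyNetworkConfig` definition.

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct EarlyNetworkConfig {
    // Bumping this marks the config as newer, so the bootstore
    // propagates it to the other sled-agents.
    generation: u64,
    // Field present in the old on-disk file (hypothetical name).
    rack_subnet: String,
    // Newly added field (hypothetical name). Without `#[serde(default)]`,
    // deserializing an old file that lacks it fails with a
    // missing-field error -- the failure mode described above.
    #[serde(default)]
    uplink_vlans: Vec<u16>,
}

fn main() -> Result<(), serde_json::Error> {
    // An "old" persisted file, written before the new field existed.
    let old = r#"{ "generation": 5, "rack_subnet": "fd00:1122:3344::/56" }"#;
    // Parses only because `uplink_vlans` is defaulted.
    let mut cfg: EarlyNetworkConfig = serde_json::from_str(old)?;
    // The manual workaround: fill in the missing field and bump the
    // generation so the bootstore treats the rewritten file as newest.
    cfg.uplink_vlans = vec![200];
    cfg.generation += 1;
    println!("{}", serde_json::to_string_pretty(&cfg)?);
    Ok(())
}
```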

At this point, things started to move forward again: sled agents were transitioning from bootstrap-agent to sled-agent. However, we then hit another roadblock: the switches were not fully initialized. The sled-agent we had restarted was a scrimlet sled-agent, so restarting it took down the switch zone and everything in it. When the switch zone came back up, it came up without any configuration: the dendrite service was not listening on the underlay, links had not been configured, addresses had not been configured, and so on.

After looking through logs and various states in the system, we decided to restart the same sled-agent again. It got much further this time, with configured links and various other dpd state, but the system was still not coming up. One node in the cluster had synchronized with an upstream NTP server and had already launched Nexus (presumably in a brief window when the network was fully set up). The other nodes had made no real progress because their NTP zones had not reached synchronization. After looking around more, we discovered that this was because the switches were missing NAT entries, as well as some address entries.

It appears that NAT entries had been created before our scrimlet sled-agent restart, and restarting that sled-agent took out the switch zone, clobbering those entries. I believe the entries were created by a different sled-agent, one with a boundary NTP zone that needed NAT, so when we restarted the scrimlet sled-agent, it had no idea there were missing NAT entries to repopulate. The missing address entries were uplink addresses: they were present in the uplink SMF service properties, but they had not been added to the ASIC via Dendrite as local addresses. Not sure how that happened.

The takeaway here is that we need to be able to handle scrimlet sled-agent restarts during cold boot and keep driving forward toward the system coming back online, not getting stuck in half-configured states.
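To make that takeaway concrete, here is a hedged sketch of the "keep driving forward" idea: instead of pushing switch configuration once at startup, periodically reconcile the switch's actual NAT table against a desired set computed from a source of truth that survives restarts. Every name here (`NatEntry`, `desired_nat_entries`, the in-memory "switch table") is hypothetical, not dendrite's or sled-agent's real API; the actual fix lives in the PRs discussed below.

```rust
use std::collections::HashSet;
use std::{thread, time::Duration};

#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct NatEntry {
    external_ip: String,
    port_range: (u16, u16),
}

fn desired_nat_entries() -> HashSet<NatEntry> {
    // In a real system this would come from a source of truth that
    // survives a scrimlet restart (e.g. the control-plane database),
    // not from the restarted sled-agent's in-memory state.
    HashSet::from([NatEntry {
        external_ip: "203.0.113.10".to_string(),
        port_range: (16384, 32767),
    }])
}

fn reconcile(actual: &mut HashSet<NatEntry>) {
    let desired = desired_nat_entries();
    // Re-add entries the switch is missing (e.g. after a switch zone
    // restart wiped them)...
    for entry in desired.difference(actual).cloned().collect::<Vec<_>>() {
        println!("re-adding clobbered NAT entry: {entry:?}");
        actual.insert(entry);
    }
    // ...and drop entries that are no longer desired.
    actual.retain(|e| desired.contains(e));
}

fn main() {
    // Simulate the switch zone coming back up empty after a restart.
    let mut switch_nat_table: HashSet<NatEntry> = HashSet::new();
    for _ in 0..3 {
        reconcile(&mut switch_nat_table);
        thread::sleep(Duration::from_millis(100));
    }
    assert_eq!(switch_nat_table, desired_nat_entries());
}
```

The important property is idempotence: running the loop against an already-correct table is a no-op, so it is safe to run at any point during cold boot, including right after a restart clobbers the switch zone.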

internet-diglett commented 8 months ago

@rcgoodfellow was #4857 sufficient to solve this, or do we also need the changes from #4822?

rcgoodfellow commented 8 months ago

We've confirmed we're in good shape here on a reboot of the scrimlet. But we should also test restarting just the sled agent service.

rcgoodfellow commented 8 months ago

This can now be closed: both scrimlet reboots and sled-agent restarts have been tested.

askfongjojo commented 8 months ago

Test 1: reboot without any ongoing orchestration activities

Test 2: reboot with ongoing orchestration activities

Test 3: reboot with in-progress VM-to-VM traffic and a guest OS image import

Test 4: repeat Test 3 on scrimlet1

askfongjojo commented 8 months ago

I saw an issue after the scrimlet reboot: #5214. I haven't lined up all the timeline events, but the instances involved were all created after the reboot testing, so it may be related to the scrimlet cold boot testing. Regardless, this ticket can stay closed while we have more specific things to track down in #5214.