clearcontainers / proxy

Hypervisor based containers proxy
Apache License 2.0

cc-proxy/cc-shim high availability #4

Open sameo opened 7 years ago

sameo commented 7 years ago

From @sameo on December 2, 2016 17:36

If cc-proxy crashes:

  1. all cc-shim instances terminate.
  2. cc-proxy will not be able to restore its internal state after restarting.

We need to work on:

  1. Have cc-shim retry connecting to the proxy when the socket is closed or disappears.
  2. Have cc-proxy rebuild all of its internal state on restart, based on the stored information.

Copied from original issue: 01org/cc-oci-runtime#505

sameo commented 7 years ago

From @laijs on December 4, 2016 23:46

The virtio-serial link is not a packet-based transport, so it seems hard to find the message header when cc-proxy reconnects to hyperstart.

dvoytik commented 7 years ago

Hi @dlespiau, are you doing anything related to this feature? If not then I'd like to hack on this if you don't mind.

dlespiau commented 7 years ago

Hi,

I'm doing the low-level part of this: framing on top of the Host<->VM serial link, so the proxy can recover the start of a frame when reconnecting to a running VM.

I haven't started on the task of saving on-disk state that the proxy can read when starting again, though. You could take that part.

dvoytik commented 7 years ago

Hi @dlespiau,

That's awesome! I've started experimenting with exactly that, the on-disk store/restore of the state, as it's the most obvious part for me. When I have something substantial to show, I'll post a WIP PR here.

Cheers.

jodh-intel commented 7 years ago

Thanks @dvoytik! Feel free to create an issue and assign to yourself (and maybe reference this issue) so it's clear to the whole team that that is something you're working on.

dvoytik commented 7 years ago

@jodh-intel, done. Although I can't assign it to myself.

jodh-intel commented 7 years ago

@dvoytik - thanks - assigned.

sboeuf commented 7 years ago

@dlespiau any chance you have some leftover work in progress on re-syncing a lost frame between the proxy and the VM serial port?

dlespiau commented 7 years ago

Unfortunately, the work was wiped out when I dd'ed /dev/urandom to my hard drive :/

sboeuf commented 7 years ago

@dlespiau no worries, that's what I was expecting :p That's what you do when you move on to something else!

sboeuf commented 7 years ago

@dlespiau BTW, we have a public IRC channel #clearcontainers on freenode. Come discuss containers if you're interested ;)

jodh-intel commented 7 years ago

@sboeuf - could you outline what you know about this problem?

sboeuf commented 7 years ago

@jodh-intel I'll go further and try to cover all the cases and how our components should be modified. The scenario is simple: Clear Containers is running, meaning all components (runtime/shim/proxy/VM agent) are up. When the proxy crashes, the shim, runtime and agent detect the proxy disconnection while trying to communicate with it.

Here is what each component should do upon detecting this:

  1. Shim

    • Try to reconnect for some time (already handled by this PR https://github.com/clearcontainers/shim/pull/54)
    • Buffer all inputs and signals that cannot be forwarded to the proxy while it is restarting. Once the connection is re-established, the shim should send everything it buffered.
    • Save the last command so that it can be re-sent after reconnecting to the proxy; otherwise it is lost.
  2. Agent

    • Handle the proxy's reconnection gracefully and continue where we left off.
    • Buffer all output destined for STDOUT/STDERR so the agent can send it once the proxy reconnects. This is specific to the I/O channel.
    • Save the last command that was executing when the proxy crashed, along with its result. When the proxy reconnects, it will send that command again because it never received the result (the re-send is really triggered by the shim or the runtime when they reconnect). The agent should therefore inspect the command the proxy sends after reconnecting and, if it matches the last command, not execute it but return the saved result. That way the proxy still receives the result without the command running a second time.
    • Along the same lines, we should always save the last output we sent. This lets the agent resend the result when the same command is submitted again by the shim or the runtime, without actually re-running the command, which could produce different results.
  3. Runtime

    • Try to reconnect to the proxy.
    • Re-send the failed command. We should only report the command as failed when the failure is a genuine agent error, not a consequence of the proxy crash.
  4. Proxy

    • Save the most recent state as soon as a modification occurs, basically every time a new token/session ID is created at the runtime's request. The proxy needs the exact map of tokens and session IDs when it recovers, so that it can directly receive output coming from the agent (the last message that never made it through, plus the buffered ones).

@sameo @grahamwhaley @jodh-intel I might have missed a few corner cases, but I'd like to get your input on this. This is pretty important, since we need to agree before we can open the corresponding issues and start the implementation.

jodh-intel commented 7 years ago

Hi @sboeuf - thanks for this. If you don't mind, I'll merge the above with my notes and put it into a draft design (https://github.com/clearcontainers/runtime/issues/683) doc showing (a) what we have today and (b) what we want in the future...

jodh-intel commented 7 years ago

@sboeuf - I've now raised a doc PR including your comments above:

sboeuf commented 7 years ago

@jodh-intel great thanks !

sboeuf commented 7 years ago

But I'd like to get some feedback about it too. Does that make sense to everyone?