Open 0pcom opened 6 months ago
A transport was established from machine 1 to machine 2:
```
$ skywire-cli visor tp ls
type id remote_pk mode label
sudph eada53a2-6afb-05c7-aeea-6b804e798401 03c758a0daabab4b4bbd524cf46306212818b051b9d92949bd8acdf6226aad32e6 regular user
```
Machine 1 was left alone, and machine 2 was shut down gracefully.
Upon returning to machine 1, the transport which had been established still existed (see above).
However, the entry for that transport no longer existed in the transport discovery (TPD); it had apparently been removed when the other machine was shut down.
No indication of any error or state change was observed in the visor debug logging on machine 1:
```
[2024-03-23T19:26:08.412471062-05:00] DEBUG [visor]: Saving transport to 03c758a0daabab4b4bbd524cf46306212818b051b9d92949bd8acdf6226aad32e6 via sudph
[2024-03-23T19:26:08.41251521-05:00] DEBUG [transport_manager]: Initializing TP with ID eada53a2-6afb-05c7-aeea-6b804e798401
[2024-03-23T19:26:08.412626422-05:00] DEBUG [transport_manager]: Dialing transport to 03c758a0daabab4b4bbd524cf46306212818b051b9d92949bd8acdf6226aad32e6 via sudph
[2024-03-23T19:26:08.932671941-05:00] DEBUG [sudph]: Resolved PK 03c758a0daabab4b4bbd524cf46306212818b051b9d92949bd8acdf6226aad32e6 to visor data {70.121.23.42:60843 false {60843 [127.0.0.1 192.168.2.130 ::1]}}
[2024-03-23T19:26:08.932749682-05:00] DEBUG [sudph]: Dialing 70.121.23.42:60843
[2024-03-23T19:26:08.932903284-05:00] DEBUG [sudph]: Dialed 70.121.23.42:60843
[2024-03-23T19:26:08.932946149-05:00] DEBUG [sudph]: Performing handshake with 70.121.23.42:60843
[2024-03-23T19:26:08.941792824-05:00] DEBUG [sudph]: Sent handshake to 70.121.23.42:60843, local addr 0323272a60895f56aad82cb767fb5c413807adcf7c9fb0578b1b1c5807c7f29d4c:49159, remote addr 03c758a0daabab4b4bbd524cf46306212818b051b9d92949bd8acdf6226aad32e6:45
[2024-03-23T19:26:09.532743874-05:00] DEBUG [tp:03c758]: Sent signal to 'mt.transportCh'.
[2024-03-23T19:26:09.532832613-05:00] DEBUG [transport_manager]: saved transport: remote(03c758a0daabab4b4bbd524cf46306212818b051b9d92949bd8acdf6226aad32e6) type(sudph) tpID(eada53a2-6afb-05c7-aeea-6b804e798401)
[2024-03-23T19:26:09.532866677-05:00] DEBUG [visor]: Saved transport to 03c758a0daabab4b4bbd524cf46306212818b051b9d92949bd8acdf6226aad32e6 via sudph, label user
[2024-03-23T19:26:09.532933277-05:00] DEBUG [tp:03c758]: Serving. remote_pk=03c758a0daabab4b4bbd524cf46306212818b051b9d92949bd8acdf6226aad32e6 tp_id=eada53a2-6afb-05c7-aeea-6b804e798401 tp_index=1
```
Something we rarely look at is what happens on internet connection loss.
A visor with a running proxy server and the following transports was used.
The network was temporarily disconnected via the NetworkManager applet on Linux in order to observe the visor debug logging, and then reconnected.
The following visor debug logging was produced:
### Observations

### Further Examination
- Attempting to re-establish the same transport type to the same visor succeeded.
- Deleting the transport also worked.
- Deleting transports which TPD reports for this visor, but which do not exist locally, does not work.

Further observation of the behavior of remote visors when transports fail is needed.