With multiple neutrino peers supplied through the lnd.conf/neutrino.Config structure, it seems that NewChainService() will try to resolve the peers' addresses and connect to them. However, as soon as one of these addresses is deemed invalid, NewChainService() stops attempting and simply returns an error immediately, without trying the remaining peer addresses.
At first glance this looks like a fairly trivial change to remedy, but I'm not sure whether there are other implications, e.g. whether the neutrino.Config structure should be updated to remove the invalid addresses.
// Start up persistent peers.
permanentPeers := cfg.ConnectPeers
if len(permanentPeers) == 0 {
	permanentPeers = cfg.AddPeers
}

var peerConnErr error
var peersConnected int
for _, addr := range permanentPeers {
	tcpAddr, err := s.addrStringToNetAddr(addr)
	if err != nil {
		peerConnErr = err
	} else {
		peersConnected++
		go s.connManager.Connect(&connmgr.ConnReq{
			Addr:      tcpAddr,
			Permanent: true,
		})
	}
}
if peersConnected != 0 {
	peerConnErr = nil
}

return &s, peerConnErr
I can create a Pull Request if this seems like the right direction to go?