I'm not sure whether this is on the usage side or the library side, but I create a new pinger in each worker goroutine, calling it like this:
```go
func ping_host(host string, count int, timeout int) error {
	pinger, err := ping.NewPinger(host)
	if err != nil {
		return err
	}
	if runtime.GOOS == "windows" {
		// Windows requires SetPrivileged
		pinger.SetPrivileged(true)
	} else {
		// other platforms can do this perfectly well via a system setting,
		// either
		//   setcap cap_net_raw=+ep /opt/slacheck/slacheck
		// or
		//   sudo sysctl -w net.ipv4.ping_group_range="0 2147483647"
		pinger.SetPrivileged(false)
	}
	pinger.Timeout = time.Second * time.Duration(timeout*count)
	pinger.Interval = time.Second * time.Duration(timeout)
	pinger.Count = count
	err = pinger.Run()
	if err != nil {
		return err
	}
	stats := pinger.Statistics()
	if stats.PacketLoss == 100 {
		return errors.New("Host Timeout: " + host)
	}
	return nil
}
```
Once in a while (every 400-600 hosts I ping) I get the following crash:
```
panic: close of closed channel

goroutine 3301 [running]:
github.com/go-ping/ping.(*Pinger).recvICMP(0xc00005f540, 0xc001b02260, 0xc0011286c0, 0xc0018ac7e0, 0x0, 0x0)
	C:/Users/ngrgeo/devel/go/pkg/mod/github.com/go-ping/ping@v0.0.0-20210312085107-d90f3778a8a3/ping.go:518 +0x372
created by github.com/go-ping/ping.(*Pinger).Run
	C:/Users/ngrgeo/devel/go/pkg/mod/github.com/go-ping/ping@v0.0.0-20210312085107-d90f3778a8a3/ping.go:401 +0x299
```
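For reference, Go raises this exact runtime error whenever `close` is called a second time on an already-closed channel, so presumably two code paths inside the library race to close the same channel. A minimal reproduction (the `doubleClose` helper is mine, purely for illustration):

```go
package main

import "fmt"

// doubleClose closes ch twice and returns the panic message from the
// second close, reproducing the exact runtime error in the trace above.
func doubleClose(ch chan struct{}) (msg string) {
	defer func() {
		if r := recover(); r != nil {
			msg = fmt.Sprint(r)
		}
	}()
	close(ch)
	close(ch) // second close panics: "close of closed channel"
	return ""
}

func main() {
	fmt.Println(doubleClose(make(chan struct{}))) // close of closed channel
}
```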
This may be a bug in the threading / mutex handling; it seems recvICMP is not really thread-safe? For context, I'm running 100 goroutines in parallel (I have to ping a few thousand hosts).
I haven't taken a closer look at the implementation yet, but I thought I'd raise the issue; I can still dig into it later.
//G