lesismal / nbio

Pure Go 1000k+ connections solution; supports tls/http1.x/websocket, is largely compatible with net/http, with high performance and low memory cost, non-blocking, event-driven, easy to use.
MIT License
2.17k stars 153 forks

how to increase the number of eventloops in the tls server example? #411

Closed: ouvaa closed this 6 months ago

ouvaa commented 6 months ago

original output:

2024/03/24 11:39:26.648 [INF] NBIO Engine[NB] start with [3 eventloop], listen on: ["tcp@127.0.0.1:8888"], MaxOpenFiles: 1048576

It is limited to 3 eventloops currently. I set NPoller to increase it, but then the output changed to:

./serverssl 
panic: runtime error: index out of range [3] with length 3

goroutine 1 [running]:
github.com/lesismal/nbio.(*Engine).Start(0xc000002300)
    /root/go/pkg/mod/github.com/lesismal/nbio@v1.5.3-0.20240324142751-a768b89f838c/engine_unix.go:73 +0x9b5
main.main()
    /home/ubuntu/nbio-examples/tls/server/serverssl.go:38 +0x48a

How do I resolve this? I actually want NPoller = NumCPU, so 12 or more.

package main

import (
        "log"

        "github.com/lesismal/llib/std/crypto/tls"
        "github.com/lesismal/nbio"
        ntls "github.com/lesismal/nbio/extension/tls"
)

func main() {
        cert, err := tls.X509KeyPair(rsaCertPEM, rsaKeyPEM)
        if err != nil {
                log.Fatalf("tls.X509KeyPair failed: %v", err)
        }
        tlsConfig := &tls.Config{
                Certificates:       []tls.Certificate{cert},
                InsecureSkipVerify: true,
        }

        g := nbio.NewEngine(nbio.Config{
                Network: "tcp",
                Addrs:   []string{"localhost:8888"},
        })
        g.NPoller = 12 // setting NPoller after NewEngine is what triggers the panic above
        // g.NListener = 12 // there's no NListener
        isClient := false
        g.OnOpen(ntls.WrapOpen(tlsConfig, isClient, func(c *nbio.Conn, tlsConn *tls.Conn) {
                log.Println("OnOpen:", c.RemoteAddr().String())
        }))
        g.OnClose(ntls.WrapClose(func(c *nbio.Conn, tlsConn *tls.Conn, err error) {
                log.Println("OnClose:", c.RemoteAddr().String())
        }))
        g.OnData(ntls.WrapData(func(c *nbio.Conn, tlsConn *tls.Conn, data []byte) {
                log.Println("OnData:", c.RemoteAddr().String(), string(data))
                tlsConn.Write(data)
        }))

        err = g.Start()
        if err != nil {
                log.Fatalf("nbio.Start failed: %v\n", err)
                return
        }
        defer g.Stop()

        g.Wait()
}
lesismal commented 6 months ago

For the current version, please init NListener/NPoller in the config when calling NewEngine (the engine sizes its pollers from the config at NewEngine time, so assigning NPoller afterwards causes the out-of-range panic):

g := nbio.NewEngine(nbio.Config{
        Network:   "tcp",
        Addrs:     []string{"localhost:8888"},
        NPoller:   M,
        NListener: N,
})
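
For example, a minimal sketch that sizes the pollers from the CPU count; runtime.NumCPU() here is just an illustration, and NListener may not exist in older releases (see the build error below), so omit it there:

package main

import (
        "log"
        "runtime"

        "github.com/lesismal/nbio"
)

func main() {
        // Size the pollers in the Config, before NewEngine builds the engine.
        g := nbio.NewEngine(nbio.Config{
                Network: "tcp",
                Addrs:   []string{"localhost:8888"},
                NPoller: runtime.NumCPU(), // e.g. 12 eventloops on a 12-core machine
        })

        if err := g.Start(); err != nil {
                log.Fatalf("nbio.Start failed: %v", err)
        }
        defer g.Stop()
        g.Wait()
}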
ouvaa commented 6 months ago

go build -ldflags "-w -s" server.go 
# command-line-arguments
./server.go:30:9: unknown field NListener in struct literal of type nbio.Config
ouvaa commented 6 months ago

1 eventloop performs worst and 3 seems ideal; how many should I have? How do I set this optimally other than testing it all the time?

lesismal commented 6 months ago

Usually, the default configuration is good enough; if you want to optimize for your hardware specification, you need to do some testing yourself.

Another point: if you want to balance io handling across multiple poller goroutines, you can try setting nbio.Engine.NPoller=1 and nbio.Engine.AsyncRead=true. Then the poller handles io events only and the reading is done in a separate goroutine pool; you can also customize that io goroutine pool by setting the nbio.Engine.IOExecute func. This avoids different pollers handling different numbers of connections and therefore carrying different loads.
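
A minimal sketch of that setup, assuming NPoller and AsyncRead are settable via nbio.Config in your version (the field names come from the comment above; verify them against your version's Config definition before relying on this):

// Assumption: NPoller/AsyncRead are Config fields in this nbio version.
g := nbio.NewEngine(nbio.Config{
        Network:   "tcp",
        Addrs:     []string{"localhost:8888"},
        NPoller:   1,    // the single poller only dispatches io events
        AsyncRead: true, // reads run in a shared goroutine pool, evening out per-poller load
})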

Whatever you do to optimize performance, you should test it in your own environment, because different environments, inputs, and outputs lead to different results.

ouvaa commented 6 months ago

@lesismal thank you