UnownHash / Dragonite-Public


Dragonite not processing change in number of workers #13

Closed · Jebula999 closed this issue 10 months ago

Jebula999 commented 10 months ago

When changing the number of workers from 9 to 4, I get the following error.

dragonite    | INFO  [] RELOAD: Area 149 / Test worker change 9->4
dragonite    |
dragonite    |
dragonite    |  [Recovery] 2023/11/18 - 10:25:49 panic recovered:
dragonite    | runtime error: invalid memory address or nil pointer dereference
dragonite    | runtime/panic.go:261 (0x452237)
dragonite    | runtime/signal_unix.go:861 (0x452205)
dragonite    | github.com/unownhash/dragonite2/worker/questworker.go:371 (0xf3c048)
dragonite    | github.com/unownhash/dragonite2/worker/mode.go:225 (0xf3c03f)
dragonite    | github.com/unownhash/dragonite2/worker/modeswitcher.go:38 (0xf3c037)
dragonite    | github.com/unownhash/dragonite2/worker/workerarea.go:352 (0xf4b044)
dragonite    | github.com/unownhash/dragonite2/worker/starter.go:243 (0xf46b5d)
dragonite    | github.com/unownhash/dragonite2/routes/reload.go:22 (0xf6f4c4)
dragonite    | github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x96c70a)
dragonite    | github.com/unownhash/dragonite2/routes/main.go:137 (0xf76c72)
dragonite    | github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x978739)
dragonite    | github.com/gin-gonic/gin@v1.9.1/recovery.go:102 (0x978727)
dragonite    | github.com/gin-gonic/gin@v1.9.1/context.go:174 (0xf63d1c)
dragonite    | github.com/toorop/gin-logrus@v0.0.0-20210225092905-2c785434f26f/logger.go:43 (0xf63d03)
dragonite    | github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x97761a)
dragonite    | github.com/gin-gonic/gin@v1.9.1/gin.go:620 (0x9772ad)
dragonite    | github.com/gin-gonic/gin@v1.9.1/gin.go:576 (0x976ddc)
dragonite    | net/http/server.go:2938 (0x6f75ed)
dragonite    | net/http/server.go:2009 (0x6f34d3)
dragonite    | runtime/asm_amd64.s:1650 (0x46e900)

Dragonite shows 4 workers in the Area tab, but still shows all 9 in the Dashboard. All 9 workers still stay on the same area. (Screenshots attached.)

If I then change the number of workers from the 4 I set to, say, 5, I get the following error:

dragonite    | INFO  [] RELOAD: Area 149 / Test worker change 9->5
dragonite    |
dragonite    |
dragonite    |  [Recovery] 2023/11/18 - 10:25:49 panic recovered:
dragonite    | runtime error: invalid memory address or nil pointer dereference
dragonite    | runtime/panic.go:261 (0x452237)
dragonite    | runtime/signal_unix.go:861 (0x452205)
dragonite    | github.com/unownhash/dragonite2/worker/questworker.go:371 (0xf3c048)
dragonite    | github.com/unownhash/dragonite2/worker/mode.go:225 (0xf3c03f)
dragonite    | github.com/unownhash/dragonite2/worker/modeswitcher.go:38 (0xf3c037)
dragonite    | github.com/unownhash/dragonite2/worker/workerarea.go:352 (0xf4b044)
dragonite    | github.com/unownhash/dragonite2/worker/starter.go:243 (0xf46b5d)
dragonite    | github.com/unownhash/dragonite2/routes/reload.go:22 (0xf6f4c4)
dragonite    | github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x96c70a)
dragonite    | github.com/unownhash/dragonite2/routes/main.go:137 (0xf76c72)
dragonite    | github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x978739)
dragonite    | github.com/gin-gonic/gin@v1.9.1/recovery.go:102 (0x978727)
dragonite    | github.com/gin-gonic/gin@v1.9.1/context.go:174 (0xf63d1c)
dragonite    | github.com/toorop/gin-logrus@v0.0.0-20210225092905-2c785434f26f/logger.go:43 (0xf63d03)
dragonite    | github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x97761a)
dragonite    | github.com/gin-gonic/gin@v1.9.1/gin.go:620 (0x9772ad)
dragonite    | github.com/gin-gonic/gin@v1.9.1/gin.go:576 (0x976ddc)
dragonite    | net/http/server.go:2938 (0x6f75ed)
dragonite    | net/http/server.go:2009 (0x6f34d3)
dragonite    | runtime/asm_amd64.s:1650 (0x46e900)

Once again, Dragonite shows 5 workers in the Area tab but still shows all 9 in the Dashboard, and all 9 workers stay on the same area. The reload log also still reports the previous worker count as 9 (9->5), instead of the 4 set previously.

To fully apply the new worker count, the Docker container needs to be restarted.
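For illustration only, here is a minimal, self-contained Go sketch of the general class of failure the trace points at: a reload tears down part of a worker list while another goroutine (think of the mode switcher in the trace) still dereferences the old entries. The types and names are hypothetical, not Dragonite's actual code, and the slice access is deliberately left unsynchronized to show the panic.

```go
package main

import (
	"fmt"
	"time"
)

// worker is a stand-in for a per-device worker; not Dragonite's real type.
type worker struct{ id int }

func (w *worker) switchMode() string {
	// Dereferences w.id, so this panics with
	// "invalid memory address or nil pointer dereference" when w is nil.
	return fmt.Sprintf("worker %d switching mode", w.id)
}

func main() {
	workers := make([]*worker, 9)
	for i := range workers {
		workers[i] = &worker{id: i}
	}

	// A background loop keeps using the worker list, the way a mode
	// switcher or quest worker would between reloads.
	go func() {
		for {
			for _, w := range workers {
				_ = w.switchMode() // panics once an entry becomes nil
			}
			time.Sleep(10 * time.Millisecond)
		}
	}()

	// A "reload" from 9 workers to 4 drops the extra entries without
	// coordinating with the loop above (intentionally a data race here,
	// to reproduce the failure mode).
	for i := 4; i < len(workers); i++ {
		workers[i] = nil
	}

	time.Sleep(100 * time.Millisecond)
}
```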

jfberry commented 10 months ago

There is an occasional race condition when changing worker numbers, which happens especially if you change them in quick succession or during startup. I plan to rewrite the worker adjuster, but it is pretty core to worker allocation.
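Until such a rewrite lands, the usual pattern for avoiding this kind of race is to serialize worker-count changes behind a lock so a reload never operates on a partially torn-down list. A minimal sketch, using hypothetical names (areaWorkers, Resize) rather than Dragonite's actual adjuster:

```go
package main

import (
	"fmt"
	"sync"
)

// worker and areaWorkers are illustrative stand-ins, not Dragonite's types.
type worker struct{ id int }

type areaWorkers struct {
	mu      sync.Mutex
	workers []*worker
}

// Resize rebuilds the worker list under a lock, so overlapping reloads
// (or a reload racing with startup) always see a consistent, fully
// initialized slice.
func (a *areaWorkers) Resize(n int) {
	a.mu.Lock()
	defer a.mu.Unlock()

	next := make([]*worker, n)
	for i := range next {
		if i < len(a.workers) && a.workers[i] != nil {
			next[i] = a.workers[i] // keep workers that already exist
		} else {
			next[i] = &worker{id: i} // start workers for new slots
		}
	}
	// Workers beyond index n-1 would be stopped here before being dropped.
	a.workers = next
}

// Count reads the current size under the same lock.
func (a *areaWorkers) Count() int {
	a.mu.Lock()
	defer a.mu.Unlock()
	return len(a.workers)
}

func main() {
	a := &areaWorkers{}
	a.Resize(9)
	a.Resize(4) // quick successive changes stay consistent
	a.Resize(5)
	fmt.Println("workers now:", a.Count())
}
```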

jfberry commented 10 months ago

Having said that, I believe I have closed this particular panic (others will exist, given that reloading needs some attention!).