mbentley / docker-omada-controller

Docker image to run TP-Link Omada Controller

Constant disconnect from cloud. #268

Closed: jonas740 closed this issue 11 months ago

jonas740 commented 1 year ago

Describe the Bug

I have started to get intermittent notifications that my ER605 has disconnected from the controller. Today it has happened around 10 times, but everything works at home, and my local controller is accessible and does not show any errors from what I can see.

It does, however, show that a lot of clients were disconnected and then reconnected.

I am using a TP-Link ER605 v2 and an EAP-615-Wall.

I did notice that the docker compose you provide contains more settings nowadays, but could that be the issue?

I did try deploying that docker compose, but it made all my settings go away. I restored a backup, so some settings reappeared, but far from all of them, so I went back to the old docker-compose. _omada-controller_logs.txt

Expected Behavior

Well, I expect no disconnects.

Steps to Reproduce

Intermittent.

How You're Launching the Container

version: "3.1"
services:
  omada-controller:
    container_name: omada-controller
    image: mbentley/omada-controller:latest
    healthcheck:
      disable: true
    environment:
      - TZ=Etc/UTC
      - MANAGE_HTTP_PORT=8088
      - MANAGE_HTTPS_PORT=8043
      - PORTAL_HTTP_PORT=8088
      - PORTAL_HTTPS_PORT=8043
      - SHOW_SERVER_LOGS=true
      - SHOW_MONGODB_LOGS=false
      - SSL_CERT_NAME="tls.crt"
      - SSL_KEY_NAME="tls.key"
    network_mode: host
    volumes:
      - omada-data:/opt/tplink/EAPController/data
      - omada-work:/opt/tplink/EAPController/work
      - omada-logs:/opt/tplink/EAPController/logs
    restart: unless-stopped
volumes:
  omada-data:
  omada-work:
  omada-logs:
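One detail worth noting in the environment block above: in YAML list syntax, the quotes in - SSL_CERT_NAME="tls.crt" are literal characters, so the stored value includes the quotes unless the image's entrypoint strips them (whether it does is not confirmed here). A quick, hedged way to check what the container actually received, using the container name from the compose above:

# Print the SSL-related variables as the container process sees them;
# if quotes appear in the output, they are part of the stored value
docker exec omada-controller env | grep -E 'SSL_(CERT|KEY)_NAME'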

Container Logs

12-07-2022 21:08:49.151 INFO [https-jsse-nio-8043-exec-5] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:09:49.155 INFO [https-jsse-nio-8043-exec-4] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:10:49.156 INFO [https-jsse-nio-8043-exec-1] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:11:40.156 WARN [quartzScheduler_Worker-1] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 21:11:49.156 INFO [https-jsse-nio-8043-exec-5] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:12:49.159 INFO [https-jsse-nio-8043-exec-3] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:13:49.159 INFO [https-jsse-nio-8043-exec-2] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:14:49.169 INFO [https-jsse-nio-8043-exec-7] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:16:27.154 INFO [https-jsse-nio-8043-exec-7] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:16:27.411 INFO [https-jsse-nio-8043-exec-10] [] c.t.s.o.p.p.a.k(): Received invalid PortalBatchQueryDTO: PortalBatchQueryDTO(omadacId=64cde7237e224b81b1a287f7de0034f4, siteId=6337647ef54a8326902ac563, portalIds=[])
12-07-2022 21:16:40.157 WARN [quartzScheduler_Worker-2] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 21:17:28.648 INFO [https-jsse-nio-8043-exec-7] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:18:28.667 INFO [https-jsse-nio-8043-exec-6] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:19:28.670 INFO [https-jsse-nio-8043-exec-10] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:20:28.670 INFO [https-jsse-nio-8043-exec-4] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:21:28.664 INFO [https-jsse-nio-8043-exec-9] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:21:40.154 WARN [quartzScheduler_Worker-1] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 21:22:28.666 INFO [https-jsse-nio-8043-exec-7] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:23:28.675 INFO [https-jsse-nio-8043-exec-3] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:24:28.667 INFO [https-jsse-nio-8043-exec-9] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:25:28.681 INFO [https-jsse-nio-8043-exec-7] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:26:28.684 INFO [https-jsse-nio-8043-exec-3] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:26:40.156 WARN [quartzScheduler_Worker-2] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 21:27:28.690 INFO [https-jsse-nio-8043-exec-2] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:28:28.680 INFO [https-jsse-nio-8043-exec-8] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:29:28.693 INFO [https-jsse-nio-8043-exec-6] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:30:28.684 INFO [https-jsse-nio-8043-exec-10] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:31:28.683 INFO [https-jsse-nio-8043-exec-4] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:31:40.156 WARN [quartzScheduler_Worker-1] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 21:32:28.685 INFO [https-jsse-nio-8043-exec-1] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:33:02.956 INFO [client-inform-work-group-0] [] c.t.s.o.c.d.a.s(): Known wireless client C8-D3-FF-79-AC-EC is informed by gateway: 34-60-F9-CD-E5-CD, omadac: 64cde7237e224b81b1a287f7de0034f4, site: 6337647ef54a8326902ac563, handle as wired client now.
12-07-2022 21:33:28.702 INFO [https-jsse-nio-8043-exec-5] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:34:28.690 INFO [https-jsse-nio-8043-exec-4] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:35:28.706 INFO [https-jsse-nio-8043-exec-1] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:36:28.709 INFO [https-jsse-nio-8043-exec-5] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:36:40.155 WARN [quartzScheduler_Worker-1] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 21:37:28.713 INFO [https-jsse-nio-8043-exec-3] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:38:28.703 INFO [https-jsse-nio-8043-exec-9] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:39:28.716 INFO [https-jsse-nio-8043-exec-5] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:40:28.729 INFO [https-jsse-nio-8043-exec-3] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:41:28.724 INFO [https-jsse-nio-8043-exec-2] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:41:40.156 WARN [quartzScheduler_Worker-1] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 21:42:28.729 INFO [https-jsse-nio-8043-exec-7] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:43:28.712 INFO [https-jsse-nio-8043-exec-3] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:44:28.718 INFO [https-jsse-nio-8043-exec-9] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:45:28.716 INFO [https-jsse-nio-8043-exec-10] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:45:39.937 INFO [manage-work-group-6] [] c.t.s.o.m.d.p.t.c(): Device 34-60-F9-CD-E5-CD OmadacId(64cde7237e224b81b1a287f7de0034f4) changed to status CONNECTED_ERROR, which don't need to handle.
12-07-2022 21:45:39.939 INFO [discovery-work-group-1] [] c.t.s.o.m.d.d.m.b.a(): MANAGED_BY_OWN Device 34-60-F9-CD-E5-CD on omadac 64cde7237e224b81b1a287f7de0034f4 is discoveried.
12-07-2022 21:45:48.267 INFO [adopt-work-group-7] [] c.t.s.o.m.d.d.m.d.b.c(): Gateway OmadacId(64cde7237e224b81b1a287f7de0034f4) SiteId(6337647ef54a8326902ac563) DeviceMac(34-60-F9-CD-E5-CD) adopt[auto=true] ok
12-07-2022 21:45:48.286 INFO [adopt-work-group-7] [] c.t.s.o.m.d.d.m.a.c(): send empty setting to OmadacId(64cde7237e224b81b1a287f7de0034f4) DeviceMac(34-60-F9-CD-E5-CD)
12-07-2022 21:45:51.445 INFO [manage-work-group-9] [] c.t.s.o.m.d.p.t.c(): Device 34-60-F9-CD-E5-CD OmadacId(64cde7237e224b81b1a287f7de0034f4) changed to status CONNECTED, which don't need to handle.
12-07-2022 21:45:53.179 INFO [server-comm-pool-7] [] c.t.s.o.m.d.d.m.i.a(): got first inform of OmadacId(64cde7237e224b81b1a287f7de0034f4) DeviceMac(34-60-F9-CD-E5-CD)
12-07-2022 21:45:53.182 INFO [manage-work-group-10] [] c.t.s.o.m.d.d.m.i.e(): first inform no need send full config to OmadacId(64cde7237e224b81b1a287f7de0034f4) DeviceMac(34-60-F9-CD-E5-CD)
12-07-2022 21:45:53.182 WARN [manage-work-group-10] [] c.t.s.o.m.d.d.m.m.s.e(): fill set msg body but all setting null, skip send set msg to OmadacId(64cde7237e224b81b1a287f7de0034f4) DeviceMac(34-60-F9-CD-E5-CD)
12-07-2022 21:45:53.182 WARN [manage-work-group-10] [] c.t.s.o.m.d.d.m.m.s.e(): failed to genConfigBodyAndAddVersion with keys:[]
12-07-2022 21:45:53.183 INFO [manage-work-group-10] [] c.t.s.o.m.d.d.m.d.b.A(): syncConfigurationForSameVersion to OmadacId(64cde7237e224b81b1a287f7de0034f4) DeviceMac(34-60-F9-CD-E5-CD), result:SendDeviceMsgResult(success=false, deviceResponse=null)
12-07-2022 21:46:28.723 INFO [https-jsse-nio-8043-exec-7] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:46:40.155 INFO [comm-pool-13] [] c.t.s.o.l.c.a.a(): Start pushing connected devices.
12-07-2022 21:46:40.156 WARN [quartzScheduler_Worker-2] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 21:47:28.738 INFO [https-jsse-nio-8043-exec-6] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:48:28.730 INFO [https-jsse-nio-8043-exec-10] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:49:28.742 INFO [https-jsse-nio-8043-exec-4] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:49:40.160 WARN [manage-work-group-0] [] c.t.s.o.m.d.p.t.h.a(): send set request to OmadacId(64cde7237e224b81b1a287f7de0034f4) DeviceMac(34-60-F9-CD-E5-CD) fail, com.tplink.smb.ecsp.common.TransResult@5c50ef8[errCode=2600,msg=ERR_DEVICE_SEND_TCP_TIMEOUT,result=<null>,addressDTO=<null>]
12-07-2022 21:50:28.732 INFO [https-jsse-nio-8043-exec-6] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:51:28.735 INFO [https-jsse-nio-8043-exec-2] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:51:40.154 WARN [quartzScheduler_Worker-1] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 21:52:28.733 INFO [https-jsse-nio-8043-exec-8] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:53:28.740 INFO [https-jsse-nio-8043-exec-6] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:54:28.742 INFO [https-jsse-nio-8043-exec-10] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:54:49.095 INFO [https-jsse-nio-8043-exec-1] [] c.t.s.o.a.d.SpeedUpLoginController(): omadacId=64cde7237e224b81b1a287f7de0034f4 check login by TP CLOUD response:OperationResponse(errorCode=0, msg=Success., result=com.tplink.smb.omada.identityaccess.api.internal.dto.omadacloud.CheckLoginByTPCloudResponseDTO@e1c4b27)
12-07-2022 21:55:28.744 INFO [https-jsse-nio-8043-exec-9] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:56:28.749 INFO [https-jsse-nio-8043-exec-7] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:56:40.156 WARN [quartzScheduler_Worker-2] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 21:57:28.753 INFO [https-jsse-nio-8043-exec-8] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:58:28.751 INFO [https-jsse-nio-8043-exec-1] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 21:59:28.755 INFO [https-jsse-nio-8043-exec-7] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:00:28.767 INFO [https-jsse-nio-8043-exec-9] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:01:28.762 INFO [https-jsse-nio-8043-exec-3] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:01:40.156 WARN [quartzScheduler_Worker-2] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 22:02:10.019 INFO [client-schedule-statistic-task-0] [] c.t.s.o.c.d.a.m(): [STAT] {"wiredClients":9,"statName":"ClientStatInfo","authedClients":0,"wirelessClients":40}
12-07-2022 22:02:28.766 INFO [https-jsse-nio-8043-exec-5] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:03:28.766 INFO [https-jsse-nio-8043-exec-1] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:04:28.782 INFO [https-jsse-nio-8043-exec-7] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:05:28.772 INFO [https-jsse-nio-8043-exec-5] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:06:28.773 INFO [https-jsse-nio-8043-exec-2] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:06:40.157 WARN [quartzScheduler_Worker-1] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 22:07:28.777 INFO [https-jsse-nio-8043-exec-6] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:08:20.000 INFO [pool-4-thread-1] [] c.t.s.e.s.c.c(): [STAT] {"statName":"OmadaDeviceStat","deviceType":"gateway","ecspVer":"2.2.0","deviceNum":"1"}
12-07-2022 22:08:20.000 INFO [pool-4-thread-1] [] c.t.s.e.s.c.c(): [STAT] {"statName":"OmadaDeviceStat","deviceType":"ap","ecspVer":"2.3.0","deviceNum":"1"}
12-07-2022 22:08:28.781 INFO [https-jsse-nio-8043-exec-5] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:09:28.784 INFO [https-jsse-nio-8043-exec-8] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:10:28.799 INFO [https-jsse-nio-8043-exec-1] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:11:28.797 INFO [https-jsse-nio-8043-exec-8] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:11:40.157 WARN [quartzScheduler_Worker-2] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
12-07-2022 22:12:28.794 INFO [https-jsse-nio-8043-exec-7] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:13:28.803 INFO [https-jsse-nio-8043-exec-9] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:14:28.793 INFO [https-jsse-nio-8043-exec-4] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:15:28.810 INFO [https-jsse-nio-8043-exec-6] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:16:28.798 INFO [https-jsse-nio-8043-exec-8] [] c.t.s.o.c.u.d.a(): list local interface macs: [48-4D-7E-E7-7C-15, 02-42-D1-9A-D2-A4, 02-42-A3-C0-58-02, 02-42-55-CC-E3-DA]
12-07-2022 22:16:40.154 WARN [quartzScheduler_Worker-1] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist

Additional Context

I don't know if the recent cyber attacks going on here and there in Europe and Sweden could be the cause as well.

mbentley commented 1 year ago

I should have asked which version of the controller you are currently on; I wasn't sure if you were pulling new versions or had an older one. I can't imagine anything being a problem specifically from a container packaging standpoint: you're re-using the host's network interface, so the ports should be exposed from the host, assuming no port conflicts there. I don't have a TP-Link router, so I am not as familiar with those as I am with APs. I've certainly seen a number of reports of people having disconnects with their router, but not this frequently.

Have you also submitted a forum post with TP-Link? I'm guessing it is more of a software issue, but I don't know whether any of those warnings are important or not.

jonas740 commented 1 year ago

> I should have asked which version of the controller you are currently on; I wasn't sure if you were pulling new versions or had an older one. I can't imagine anything being a problem specifically from a container packaging standpoint: you're re-using the host's network interface, so the ports should be exposed from the host, assuming no port conflicts there. I don't have a TP-Link router, so I am not as familiar with those as I am with APs. I've certainly seen a number of reports of people having disconnects with their router, but not this frequently.
>
> Have you also submitted a forum post with TP-Link? I'm guessing it is more of a software issue, but I don't know whether any of those warnings are important or not.

Hello.

I am using your latest controller. It has been working just great, but yesterday the connection with the cloud started acting up.

Also, are the differences between your compose and mine significant? You have some more ports assigned, if I am not mistaken.

Yes, I have created an issue with TP-Link. It could just be their servers acting up, since the controller itself has not shown any disconnects from my router.

I did recently switch from an EAP245 to the EAP-615-Wall; I don't know if that could cause any issues.

mbentley commented 1 year ago

I'm assuming that by latest you mean 5.7.4 (you could be running the latest tag but not have the most recent image pulled down). From a compose perspective, the port differences will not matter when you're using host mode: Docker ignores published ports in host mode, since there is nothing to publish when the container is directly connected to the host's network interface(s).

The main thing I would suggest is validating that the ports are not already in use. Something like netstat -ltnpu, run as root, should show the ports in use on your host; it may be worth stopping the controller, running netstat to capture the ports, and verifying that none of them conflict on the host. You can look at the ports and protocols in the latest compose example or in the example docker run commands in the README.
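As a minimal sketch of both checks (container name taken from the compose above; treating 29810-29814 as the Omada device discovery/management ports is an assumption based on the image README):

# Confirm which image the running container was actually created from;
# a stale "latest" will show an old image ID here
docker inspect --format '{{.Image}}' omada-controller
docker images --digests mbentley/omada-controller

# With the controller stopped, capture what is listening on the host and
# check for anything already bound to 8088, 8043, or 29810-29814
docker stop omada-controller
sudo netstat -ltnpu
docker start omada-controller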

Happy to look at the issue on the forums (assuming it isn't a private ticket) to see if they push back because the controller is running in a container.

jonas740 commented 1 year ago

Yes, I am running 5.7.4.

Unfortunately, I made the ticket directly with TP-Link. But there are others on the TP-Link forum who have experienced the same issue; see the links below.

I did not mention this before, but the logs report something strange: "The LAN IP address/mask of EAP615-Wall Livingroom were changed to 192.168.1.80/255.255.255.0". That is the IP address it has had from the start, though.

https://community.tp-link.com/en/business/forum/topic/223458?page=2

https://community.tp-link.com/en/home/forum/topic/582160

I am using the power injector from my EAP245 with the EAP615-Wall; maybe that is the cause of the problems. But this started yesterday, and I have only had the EAP615-Wall for about 5 days. I have, however, ordered another power injector.

I changed my docker compose a bit and removed the old pulled image, so this is how it looks now.

version: "3.1"

services:
  omada-controller:
    container_name: omada-controller
    image: mbentley/omada-controller:5.7
    restart: unless-stopped
    stop_grace_period: 60s
    network_mode: host
    environment:

volumes:
  omada-data:
  omada-work:
  omada-logs:
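For completeness, a hedged sketch of the pull-and-recreate step implied by removing the old image (assuming the compose file is in the current directory; older installs use docker-compose instead of docker compose):

# Pull the pinned 5.7 tag and recreate the container; the named volumes
# (omada-data/work/logs) survive the recreate, so settings are preserved
docker compose pull
docker compose up -d

# Verify the running container is now on the 5.7 image
docker ps --filter name=omada-controller --format '{{.Image}} {{.Status}}'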

mbentley commented 1 year ago

I also see the "The LAN IP address/mask..." messages when my APs reconnect, regardless of whether the IPs actually changed, so I don't think that's an issue. I'd only be concerned about the power injector if the device were actually going offline, I suppose.

ganey commented 1 year ago

I have this too: my ER605 keeps missing heartbeats, so my log is full of 'ER605 was disconnected' alerts, even though everything is working fine.

tophee commented 1 year ago

Have you been able to solve this? I seem to be having the same or a similar issue. For me it's not just the router but all devices that intermittently disconnect from the controller... I suspect this is a bug in the actual controller, not the docker image. Did you get a response from TP-Link?

jonas740 commented 1 year ago

Hello.

I don't want to jinx things, but since I updated to the chromium release, it does not seem to behave like it did.

I will have a look through the logs later this evening to be sure.

TP-Link did massive research and even had access to my controller, but they could not give me an explanation at the time; they said they were going to hand it over to the development team.

And yes, I might have forgotten to mention it, but it was a bit intermittent: sometimes only the ER605 was dropping, and sometimes all units dropped out.

I will have a look later this evening and get back to you.

Best Regards / Jonas


tophee commented 1 year ago

I can't seem to understand what that chromium version is or does... Apart from the fact that it seems to solve this issue, what is it for?

> it was a bit intermittent: sometimes only the ER605 was dropping, and sometimes all units dropped out.

Exactly the same here: sometimes one device drops out, sometimes multiple. And sometimes the Omada logs show that they reconnected, and sometimes they don't (but they still reconnect).

Here is some of what I see in the container logs around the times when disconnects occur:

01-19-2023 05:37:59.337 INFO [manage-work-group-8] [] c.t.s.e.p.a.a(): Fail to send message REBUILD_RESPONSE to C0-C9-E3-4B-A3-EA, cause manage server route is null
01-19-2023 05:37:59.337 WARN [manage-work-group-8] [] c.t.s.o.m.d.p.t.g.a(): send rebuild reply to omadacId OmadacId(7a6c92d25f861217009ad14b40d49ea1) & mac DeviceMac(C0-C9-E3-4B-A3-EA) error, com.tplink.smb.ecsp.common.TransResult@42e9158[errCode=2001,msg=ERR_DEVICE_ROUTE_CACHE_NULL,result=<null>,addressDTO=<null>]
01-19-2023 05:38:00.022 INFO [device-timeout-workgroup-1] [] c.t.s.o.m.d.d.m.f.b(): Device DeviceMac(C0-C9-E3-4B-A3-EA) omadacId OmadacId(7a6c92d25f861217009ad14b40d49ea1) status change from Connected to Heartbeat Missed
01-19-2023 05:38:00.023 INFO [device-timeout-workgroup-0] [] c.t.s.o.m.d.d.m.f.b(): Device DeviceMac(C0-C9-E3-4B-A4-5C) omadacId OmadacId(7a6c92d25f861217009ad14b40d49ea1) status change from Connected to Heartbeat Missed
01-19-2023 05:38:28.690 INFO [discovery-work-group-68] [] c.t.s.o.m.d.d.m.b.a(): MANAGED_BY_OWN Device C0-C9-E3-4B-A3-EA on omadac 7a6c92d25f861217009ad14b40d49ea1 is discoveried.
01-19-2023 05:38:38.815 INFO [adopt-work-group-4] [] c.t.s.o.m.d.d.m.d.a.G(): Ap OmadacId(7a6c92d25f861217009ad14b40d49ea1) SiteId(Default) DeviceMac(C0-C9-E3-4B-A3-EA) adopt[auto=true] ok
01-19-2023 05:38:39.182 INFO [adopt-work-group-4] [] c.t.s.o.m.d.d.m.a.c(): send empty setting to OmadacId(7a6c92d25f861217009ad14b40d49ea1) DeviceMac(C0-C9-E3-4B-A3-EA)
01-19-2023 05:38:44.913 WARN [quartzScheduler_Worker-1] [] c.t.s.c.s.c.TaskExecutorService(): receive scheduled event with identity (log_limit_check, null) but did not execute because corresponding handler log_limit_check doesn't exist
01-19-2023 05:38:44.934 INFO [monitor-topology-pool-14] [] c.t.s.o.c.u.d.a(): list local interface macs: [76-DB-FF-2F-A5-B1, 76-DB-FF-2F-A5-B1, CE-3F-6E-D4-A7-EC, 02-42-1F-FD-64-F3, 02-42-B0-A1-AD-36, 02-42-B2-8C-9B-11, 02-42-C5-08-55-A0, 02-42-79-DA-B8-26, 02-42-55-B8-BC-A1, 02-42-7C-2F-A0-24, 02-42-D1-40-F2-35, 02-42-27-A7-67-12, 02-42-C4-48-86-98, 02-42-77-DA-5F-38, 02-42-C5-8B-85-1C, 02-42-5A-9E-6D-53, 02-42-95-14-90-AF, 02-42-11-55-C6-7C, 02-42-81-13-3E-74, 02-42-D0-A5-28-48, 02-42-0F-CC-72-90, 02-42-9A-5B-A5-C8, 02-42-DE-7C-E5-26]
01-19-2023 05:38:45.049 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:C8-2B-96-10-DD-A9 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.049 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:28-6C-07-82-FF-CC is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.050 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:B4-E6-2D-79-F2-01 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.050 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:84-CC-A8-AD-88-F1 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.051 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:84-CC-A8-AD-29-5C is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.051 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:F6-1A-DD-17-C6-68 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.051 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:24-CE-33-A2-4E-26 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.052 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:F4-03-2A-63-73-4E is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.052 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:EC-FA-BC-C4-BC-70 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.053 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:00-55-DA-5F-3A-53 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.053 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:C8-2B-96-10-DF-9D is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.054 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:EC-FA-BC-C4-B7-26 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.054 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:C8-2B-96-10-D5-2F is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.054 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:84-CC-A8-AD-20-E9 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.055 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:00-04-20-1B-E2-66 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.055 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:24-DF-A7-E8-60-A4 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.055 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:DC-54-D7-5D-DA-50 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.056 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:3C-71-BF-2C-07-FE is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:45.056 WARN [monitor-topology-pool-14] [] c.t.s.o.c.d.a.n(): Client:00-04-20-1E-7F-59 is not OswClientImage, Cannot modify TrafficRequireMark. omadac:7a6c92d25f861217009ad14b40d49ea1, site:Default
01-19-2023 05:38:48.795 INFO [manage-work-group-15] [] c.t.s.o.m.d.p.t.c(): Device C0-C9-E3-4B-A3-EA OmadacId(7a6c92d25f861217009ad14b40d49ea1) changed to status CONNECTED, which don't need to handle.
01-19-2023 05:38:53.833 INFO [server-comm-pool-0] [] c.t.s.o.m.d.d.m.i.a(): got first inform of OmadacId(7a6c92d25f861217009ad14b40d49ea1) DeviceMac(C0-C9-E3-4B-A3-EA)
01-19-2023 05:38:53.867 INFO [manage-work-group-0] [] c.t.s.o.m.d.d.m.i.d(): first inform send same version config to OmadacId(7a6c92d25f861217009ad14b40d49ea1) DeviceMac(C0-C9-E3-4B-A3-EA)
01-19-2023 05:38:53.913 INFO [manage-work-group-13] [] c.t.s.o.m.d.d.m.d.a.R(): syncConfigurationForSameVersion to OmadacId(7a6c92d25f861217009ad14b40d49ea1) DeviceMac(C0-C9-E3-4B-A3-EA), result:SendDeviceMsgResult(success=true, deviceResponse=BaseConfigRespBody(sequenceId=245, errcode=0, configVersion=null, additionalProperties={}))
01-19-2023 05:39:31.460 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client 24-DF-A7-E8-60-A4 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.512 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client 28-6C-07-82-FF-CC is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.528 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client C8-2B-96-10-DD-A9 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.536 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client C8-2B-96-10-D5-2F is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.541 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client 3C-71-BF-2C-07-FE is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.547 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client 84-CC-A8-AD-88-F1 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.557 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client 00-04-20-1E-7F-59 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.570 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client 00-55-DA-5F-3A-53 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.584 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client F4-03-2A-63-73-4E is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.596 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client 24-CE-33-A2-4E-26 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.604 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client DC-54-D7-5D-DA-50 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.616 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client 84-CC-A8-AD-29-5C is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.627 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client C8-2B-96-10-DF-9D is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.638 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client 00-04-20-1B-E2-66 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.648 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client EC-FA-BC-C4-BC-70 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.659 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client EC-FA-BC-C4-B7-26 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.668 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client B4-E6-2D-79-F2-01 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.677 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client 84-CC-A8-AD-20-E9 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:39:31.689 INFO [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Known wireless client F6-1A-DD-17-C6-68 is informed by switch: 90-9A-4A-4A-46-DD, omadac: 7a6c92d25f861217009ad14b40d49ea1, site: Default, handle as wired client now.
01-19-2023 05:40:04.425 WARN [quartzScheduler_Worker-2] [] j.u.c.ThreadPoolExecutor$DiscardPolicy(): FaceBookV2PeriodVerify Schedule ThreadPool is full, Discarding task.
01-19-2023 05:40:20.051 INFO [client-inform-work-group-1] [] c.t.s.c.l.a.AbstractReadWriteLockService(): [readWriteLockService]businessId:omadac.id:7a6c92d25f861217009ad14b40d49ea1:site.id:Default:client.mac:24-DF-A7-E8-60-A4 get writeLock module:client.manager:client fail, execute onFail.
01-19-2023 05:40:20.096 WARN [client-inform-work-group-1] [] c.t.s.o.c.d.a.s(): Failed to refresh informed client: InformedClient(omadacId=7a6c92d25f861217009ad14b40d49ea1, siteId=Default, mac=24-DF-A7-E8-60-A4, hostName=RM4-e8-60-a4, duration=395541, firstSeen=1673711261828, lastSeen=1674106802828, ip=192.168.1.101, clientType=unknown, wireless=false, connectDevMac=90-9A-4A-4A-46-DD, connectDevName=Switch SG2210P, connectDevType=switch, wirelessVid=null, ssid=null, wlanId=null, radioId=null, snr=null, ccq=null, rssi=null, aTime=null, channel=null, rxRate=null, txRate=null, activity=12503, download=437736436, upload=240172865, downloadPacket=1162496, uploadPacket=831996, wifiMode=null, powerSave=null, guest=false, associationTime=null, vid=1, networkName=LAN, port=4, lag=null, dot1x=FREE, dot1xIdentity=null, dot1xVid=0, osName=null, vendor=null, deviceCategory=null). Get client write lock timed-out.
01-19-2023 05:40:20.934 INFO [device-timeout-workgroup-1] [] c.t.s.o.m.d.d.m.f.b(): Device DeviceMac(C0-C9-E3-4B-A3-EA) omadacId OmadacId(7a6c92d25f861217009ad14b40d49ea1) status change from Connected to Heartbeat Missed
01-19-2023 05:40:24.906 WARN [quartzScheduler_Worker-2] [] j.u.c.ThreadPoolExecutor$DiscardPolicy(): FaceBookV2PeriodVerify Schedule ThreadPool is full, Discarding task.
01-19-2023 05:40:38.323 WARN [quartzScheduler_Worker-2] [] j.u.c.ThreadPoolExecutor$DiscardPolicy(): FaceBookV2PeriodVerify Schedule ThreadPool is full, Discarding task.
01-19-2023 05:40:39.895 INFO [quartzScheduler_Worker-1] [] c.t.s.o.c.b.c(): cloud schedule queue is full, discard.
01-19-2023 06:24:04.009 ERROR [manage-work-group-12] [] c.t.s.o.m.p.m.d.o.f(): omadacId 7a6c92d25f861217009ad14b40d49ea1Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27217, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27217, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]
org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27217, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27217, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]
    at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:95) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:2929) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:2865) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:2581) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:2563) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:868) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:854) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at com.tplink.smb.omada.manager.port.mongo.device.osw.f.a(SourceFile:260) ~[manager-port-mongo-5.7.4.jar:5.7.4]
    at com.tplink.smb.omada.manager.port.mongo.device.osw.f.a(SourceFile:255) ~[manager-port-mongo-5.7.4.jar:5.7.4]
    at com.tplink.smb.omada.manager.port.mongo.device.osw.f.a(SourceFile:227) ~[manager-port-mongo-5.7.4.jar:5.7.4]
    at com.tplink.smb.omada.manager.port.mongo.device.osw.f.a(SourceFile:99) ~[manager-port-mongo-5.7.4.jar:5.7.4]
    at com.tplink.smb.omada.manager.device.domain.model.f.g.a(SourceFile:95) ~[manager-core-5.7.4.jar:5.7.4]
    at java.util.Optional.ifPresent(Optional.java:178) ~[?:?]
    at com.tplink.smb.omada.manager.device.domain.model.f.g.a(SourceFile:89) ~[manager-core-5.7.4.jar:5.7.4]
    at com.tplink.smb.component.lock.api.AbstractReadWriteLockService.doWithTryLockWrite(AbstractReadWriteLockService.java:107) ~[solution-components-lock-api-1.1.4.jar:1.1.4]
    at com.tplink.smb.omada.manager.common.c.a.a(SourceFile:80) ~[manager-core-5.7.4.jar:5.7.4]
    at com.tplink.smb.omada.manager.device.domain.model.f.g.a(SourceFile:85) ~[manager-core-5.7.4.jar:5.7.4]
    at com.tplink.smb.eventcenter.domain.DefaultDomainEventBus.tryHandle(DefaultDomainEventBus.java:222) ~[eventcenter.domain-1.3.2.jar:1.3.2]
    at com.tplink.smb.eventcenter.domain.DefaultDomainEventBus.lambda$callSubscribersAsync$2(DefaultDomainEventBus.java:189) ~[eventcenter.domain-1.3.2.jar:1.3.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
    at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27217, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]
    at com.mongodb.internal.connection.BaseCluster.getDescription(BaseCluster.java:181) ~[mongodb-driver-core-4.4.2.jar:?]
    at com.mongodb.internal.connection.SingleServerCluster.getDescription(SingleServerCluster.java:44) ~[mongodb-driver-core-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate.getConnectedClusterDescription(MongoClientDelegate.java:144) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate.createClientSession(MongoClientDelegate.java:101) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.getClientSession(MongoClientDelegate.java:291) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:183) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92) ~[mongodb-driver-sync-4.4.2.jar:?]
    at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:2853) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    ... 19 more
01-19-2023 06:24:04.021 ERROR [client-device-event-work-group-0] [] c.t.s.e.d.DefaultDomainEventBus(): Event handle error. subscriber:com.tplink.smb.omada.client.domain.model.clientimage.j$$Lambda$1533/0x00000008019e9698@43ce5352
org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27217, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27217, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]
    at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:95) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:2929) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:2865) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:2581) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:2563) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:868) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:854) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at com.tplink.smb.omada.client.port.mongo.omada.client.a.c(SourceFile:255) ~[client-port-mongo-5.7.4.jar:5.7.4]
    at com.tplink.smb.omada.client.domain.a.r.a(SourceFile:144) ~[client-core-5.7.4.jar:5.7.4]
    at com.tplink.smb.omada.client.domain.model.clientimage.j.a(SourceFile:35) ~[client-core-5.7.4.jar:5.7.4]
    at com.tplink.smb.eventcenter.domain.DefaultDomainEventBus.tryHandle(DefaultDomainEventBus.java:222) ~[eventcenter.domain-1.3.2.jar:1.3.2]
    at com.tplink.smb.eventcenter.domain.DefaultDomainEventBus.callSubscribersSync(DefaultDomainEventBus.java:209) ~[eventcenter.domain-1.3.2.jar:1.3.2]
    at com.tplink.smb.eventcenter.domain.DefaultDomainEventBus.loopDomainEvent(DefaultDomainEventBus.java:168) ~[eventcenter.domain-1.3.2.jar:1.3.2]
    at com.tplink.smb.eventcenter.domain.DefaultDomainEventBus.publishSync(DefaultDomainEventBus.java:60) ~[eventcenter.domain-1.3.2.jar:1.3.2]
    at com.tplink.smb.omada.client.common.port.eventcenter.f.a(SourceFile:83) ~[client-common-5.7.4.jar:5.7.4]
    at java.util.Optional.ifPresent(Optional.java:178) ~[?:?]
    at com.tplink.smb.omada.client.common.port.eventcenter.f.handleEvent(SourceFile:64) ~[client-common-5.7.4.jar:5.7.4]
    at com.tplink.smb.eventcenter.core.DataProcessor.run(DataProcessor.java:31) ~[eventcenter.core-1.3.2.jar:1.3.2]
    at io.micrometer.core.instrument.internal.TimedRunnable.run(TimedRunnable.java:44) ~[micrometer-core-1.8.4.jar:1.8.4]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
    at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27217, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]
    at com.mongodb.internal.connection.BaseCluster.getDescription(BaseCluster.java:181) ~[mongodb-driver-core-4.4.2.jar:?]
    at com.mongodb.internal.connection.SingleServerCluster.getDescription(SingleServerCluster.java:44) ~[mongodb-driver-core-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate.getConnectedClusterDescription(MongoClientDelegate.java:144) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate.createClientSession(MongoClientDelegate.java:101) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.getClientSession(MongoClientDelegate.java:291) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:183) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92) ~[mongodb-driver-sync-4.4.2.jar:?]
    at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:2853) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    ... 21 more
01-19-2023 06:24:04.021 WARN [scheduled-pool-5] [] c.t.s.o.c.p.c.l(): Handle timeout client:ClientImageId(omadacId=7a6c92d25f861217009ad14b40d49ea1, siteId=Default, mac=5C-AD-CF-D7-9D-87) failed with exception.
org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27217, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27217, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]
    at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:95) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:2929) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:2865) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:2605) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.ExecutableFindOperationSupport$ExecutableFindSupport.doFind(ExecutableFindOperationSupport.java:220) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.core.ExecutableFindOperationSupport$ExecutableFindSupport.oneValue(ExecutableFindOperationSupport.java:132) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.repository.query.AbstractMongoQuery.lambda$getExecution$4(AbstractMongoQuery.java:159) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.repository.query.AbstractMongoQuery.doExecute(AbstractMongoQuery.java:132) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.mongodb.repository.query.AbstractMongoQuery.execute(AbstractMongoQuery.java:107) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    at org.springframework.data.repository.core.support.RepositoryMethodInvoker.doInvoke(RepositoryMethodInvoker.java:137) ~[spring-data-commons-2.6.3.jar:2.6.3]
    at org.springframework.data.repository.core.support.RepositoryMethodInvoker.invoke(RepositoryMethodInvoker.java:121) ~[spring-data-commons-2.6.3.jar:2.6.3]
    at org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.doInvoke(QueryExecutorMethodInterceptor.java:159) ~[spring-data-commons-2.6.3.jar:2.6.3]
    at org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.invoke(QueryExecutorMethodInterceptor.java:138) ~[spring-data-commons-2.6.3.jar:2.6.3]
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.3.18.jar:5.3.18]
    at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) ~[spring-aop-5.3.18.jar:5.3.18]
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.3.18.jar:5.3.18]
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215) ~[spring-aop-5.3.18.jar:5.3.18]
    at jdk.proxy2.$Proxy156.findByMacAndSiteIdAndOmadacId(Unknown Source) ~[?:?]
    at com.tplink.smb.omada.client.port.mongo.omada.client.a.a(SourceFile:77) ~[client-port-mongo-5.7.4.jar:5.7.4]
    at com.tplink.smb.omada.client.domain.a.r.a(SourceFile:84) ~[client-core-5.7.4.jar:5.7.4]
    at com.tplink.smb.omada.client.port.cache.l.a(SourceFile:84) ~[client-core-5.7.4.jar:5.7.4]
    at io.reactivex.internal.observers.LambdaObserver.onNext(LambdaObserver.java:63) ~[rxjava-2.2.18.jar:?]
    at io.reactivex.internal.operators.observable.ObservableSubscribeOn$SubscribeOnObserver.onNext(ObservableSubscribeOn.java:58) ~[rxjava-2.2.18.jar:?]
    at io.reactivex.internal.operators.observable.ObservableMap$MapObserver.onNext(ObservableMap.java:62) ~[rxjava-2.2.18.jar:?]
    at io.reactivex.subjects.PublishSubject$PublishDisposable.onNext(PublishSubject.java:308) ~[rxjava-2.2.18.jar:?]
    at io.reactivex.subjects.PublishSubject.onNext(PublishSubject.java:228) ~[rxjava-2.2.18.jar:?]
    at com.tplink.smb.omada.client.port.cache.clientimage.b.b(SourceFile:456) ~[client-port-local-5.7.4.jar:5.7.4]
    at java.util.concurrent.ConcurrentHashMap$KeySetView.forEach(ConcurrentHashMap.java:4706) ~[?:?]
    at com.tplink.smb.omada.client.port.cache.clientimage.b.d(SourceFile:431) ~[client-port-local-5.7.4.jar:5.7.4]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) ~[?:?]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
    at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27217, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]
    at com.mongodb.internal.connection.BaseCluster.getDescription(BaseCluster.java:181) ~[mongodb-driver-core-4.4.2.jar:?]
    at com.mongodb.internal.connection.SingleServerCluster.getDescription(SingleServerCluster.java:44) ~[mongodb-driver-core-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate.getConnectedClusterDescription(MongoClientDelegate.java:144) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate.createClientSession(MongoClientDelegate.java:101) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.getClientSession(MongoClientDelegate.java:291) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:183) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135) ~[mongodb-driver-sync-4.4.2.jar:?]
    at com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92) ~[mongodb-driver-sync-4.4.2.jar:?]
    at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:2853) ~[spring-data-mongodb-3.3.3.jar:3.3.3]
    ... 32 more
jonas740 commented 1 year ago

I might have jinxed myself.

Because today I got a message that the ER605 was connected to the controller.

So I checked the Events log tab in the Omada interface and I could see that it had connected, but I could not find any event showing that the ER605 had disconnected before that, which I find strange.

It also showed a lot of devices disconnecting and reconnecting, but when actually using the network I have not noticed anything.

This is very strange. I have googled it a lot and we are not alone in having this issue, and strangely enough it does not always seem to mean anything.

Are there any power-saving features that might be causing this?

There was an update a week ago for my ER605, and yesterday there was an update for my EAP-615-Wall, so the update last night might have triggered something? /Jonas

mbentley commented 1 year ago

I can't seem to understand what that chromium version is or does... Apart from that it seems to solve this issue, what is it for?

The Chromium version doesn't do anything beyond adding Chromium by extending the base image. So if you're on 5.7 and you use the Chromium tag for 5.7, it includes chromium-browser, which is required to generate PDF reports; I would assume TP-Link uses a library that requires a page to be rendered in a browser and then "printed" (or saved) as a PDF.
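
For what it's worth, a minimal sketch of switching to that variant, assuming the <version>-chromium tag naming on Docker Hub matches your controller version:

# pull the Chromium-enabled build of the same controller version (tag name assumed)
docker pull mbentley/omada-controller:5.7-chromium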

I'll have to take a look at the logs you shared to see if there is anything that sticks out to me, but to be honest, much of the log output doesn't really make sense to me without being able to see exactly what is happening in the code.

edit: so for whatever reason, the heartbeat is failing between the controller and the device, and it seems like whenever it comes back, it's getting re-adopted and re-provisioned but then continues to have heartbeat failures. Then in the next set of logs, it looks like the controller isn't able to connect to MongoDB. Is that something that happens when the controller is starting up, or is it during normal operation? Makes me wonder if the MongoDB process is dying or being killed for some reason. Anything in the MongoDB logs?
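
If you want to poke at that, here is a minimal sketch of what I would check, assuming the container is named omada-controller and the logs live in the default /opt/tplink/EAPController/logs location:

# is mongod still running inside the container?
docker top omada-controller | grep mongod

# anything suspicious at the tail of the MongoDB log?
docker exec omada-controller tail -n 50 /opt/tplink/EAPController/logs/mongod.log

# did the kernel OOM killer take out a process on the host?
dmesg -T | grep -iE 'out of memory|oom|killed process'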

tophee commented 1 year ago

I still don't really understand the Chromium thing, but since I don't need to generate any pdfs, I guess I don't need it. And I don't really see how it could possibly solve the issue here.

it looks like the controller isn't able to connect to MongoDB. Is that something that happens when the controller is starting up or is it during normal operation?

That is during normal operation. I searched the logs for mongodb and found that these errors are all over the place. I'd almost say that it is a mere coincidence that it was in the log at the same time as the device disconnect.

At the end of December there were days completely littered with the MongoDB issue, but it hasn't occurred since the entries I posted above. So maybe it fixed itself at some point when I restarted the container? No, docker ps tells me it has been running for three weeks. So the restart three weeks ago probably fixed the problem that was flooding the logs in December, but it isn't completely fixed, because it has occasionally occurred even after that last restart.

Anything in the mongodb logs?

I am not sure what to look for, so I searched for "error". Most of the occurrences are of this type:

2023-01-19T05:15:38.302+0000 I COMMAND  [conn850] command admin.$cmd command: getLastError { getlasterror: 1, $db: "admin" } numYields:0 reslen:79 locks:{} protocol:op_msg 953ms

and I assume that is irrelevant.

The only three real errors are these (with some context before and after):

2023-01-19T06:23:32.633+0000 I NETWORK  [conn935] received client metadata from 127.0.0.1:59846 conn935: { driver: { name: "mongo-java-driver|sync|spring-boot", version: "4.4.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "5.15.39-3-pve" }, platform: "Java/Private Build/17.0.5+8-Ubuntu-2ubuntu120.04" }
2023-01-19T06:23:33.291+0000 I WRITE    [LogicalSessionCacheRefresh] update config.system.sessions command: { q: { _id: { id: UUID("d3617bd5-dc81-475e-98bc-737d18e0bf62"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) } }, u: { $currentDate: { lastUse: true } }, multi: false, upsert: true } planSummary: IDHACK keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keysInserted:1 keysDeleted:1 numYields:2 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { w: 3 } }, Collection: { acquireCount: { w: 3 } } } 446317ms
2023-01-19T06:24:14.462+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:59850 #937 (5 connections now open)
2023-01-19T06:24:14.462+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:59852 #938 (6 connections now open)
2023-01-19T06:24:14.462+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:59854 #939 (7 connections now open)
2023-01-19T06:24:14.462+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:59856 #940 (8 connections now open)
2023-01-19T06:24:14.462+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:59858 #941 (9 connections now open)
2023-01-19T06:24:14.463+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:59860 #942 (10 connections now open)
2023-01-19T06:24:14.465+0000 I NETWORK  [conn933] Error sending response to client: SocketException: Broken pipe. Ending connection from 127.0.0.1:59842 (connection id: 933)
2023-01-19T06:24:14.465+0000 I NETWORK  [conn933] end connection 127.0.0.1:59842 (9 connections now open)
2023-01-19T06:24:14.473+0000 I COMMAND  [conn934] command admin.$cmd command: isMaster { isMaster: 1, helloOk: true, client: { driver: { name: "mongo-java-driver|sync|spring-boot", version: "4.4.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "5.15.39-3-pve" }, platform: "Java/Private Build/17.0.5+8-Ubuntu-2ubuntu120.04" }, $db: "admin" } numYields:0 reslen:223 locks:{} protocol:op_query 275506ms
2023-01-19T06:24:14.473+0000 I NETWORK  [conn934] Error sending response to client: SocketException: Broken pipe. Ending connection from 127.0.0.1:59844 (connection id: 934)
2023-01-19T06:24:14.473+0000 I NETWORK  [conn934] end connection 127.0.0.1:59844 (8 connections now open)
2023-01-19T06:24:14.473+0000 I COMMAND  [conn935] command admin.$cmd command: isMaster { isMaster: 1, helloOk: true, client: { driver: { name: "mongo-java-driver|sync|spring-boot", version: "4.4.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "5.15.39-3-pve" }, platform: "Java/Private Build/17.0.5+8-Ubuntu-2ubuntu120.04" }, $db: "admin" } numYields:0 reslen:223 locks:{} protocol:op_query 89729ms
2023-01-19T06:24:14.473+0000 I NETWORK  [conn935] Error sending response to client: SocketException: Broken pipe. Ending connection from 127.0.0.1:59846 (connection id: 935)
2023-01-19T06:24:14.473+0000 I NETWORK  [conn935] end connection 127.0.0.1:59846 (7 connections now open)
2023-01-19T06:24:14.473+0000 I NETWORK  [conn936] received client metadata from 127.0.0.1:59848 conn936: { driver: { name: "mongo-java-driver|sync|spring-boot", version: "4.4.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "5.15.39-3-pve" }, platform: "Java/Private Build/17.0.5+8-Ubuntu-2ubuntu120.04" }
2023-01-19T06:24:14.473+0000 I NETWORK  [conn939] received client metadata from 127.0.0.1:59854 conn939: { driver: { name: "mongo-java-driver|sync|spring-boot", version: "4.4.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "5.15.39-3-pve" }, platform: "Java/Private Build/17.0.5+8-Ubuntu-2ubuntu120.04" }
2023-01-19T06:24:14.474+0000 I NETWORK  [conn936] end connection 127.0.0.1:59848 (6 connections now open)
2023-01-19T06:24:14.474+0000 I NETWORK  [conn938] received client metadata from 127.0.0.1:59852 conn938: { driver: { name: "mongo-java-driver|sync|spring-boot", version: "4.4.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "5.15.39-3-pve" }, platform: "Java/Private Build/17.0.5+8-Ubuntu-2ubuntu120.04" }
2023-01-19T06:24:14.474+0000 I NETWORK  [conn939] end connection 127.0.0.1:59854 (5 connections now open)
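
What stands out to me in those entries is the durations: simple isMaster and session-update commands are taking 89729 ms, 275506 ms, even 446317 ms, which looks more like the storage backing the database stalling for minutes than mongod itself misbehaving. A quick way to watch for that on the host (just a sketch, assuming a Linux host with the sysstat package installed):

# extended per-device I/O stats every 5 seconds; sustained high await/%util
# during a stall points at the disk rather than the controller
iostat -x 5
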
mbentley commented 1 year ago

Sorry, I over-explained the Chromium thing. Chromium is just the purely open-source version of Google Chrome and, as you said, not relevant unless you're generating PDFs.

Just to check whether anything sticks out from my own logs, I looked to see if I have those sorts of disconnects and errors, in case they are errors that show up regularly anyway, but I am not seeing them in mine. I'm also regularly hitting my controller's APIs to pull client data via Telegraf, so I have a pretty active system despite only having 3 APs.

I'm probably at the end of my ability to debug this further, as I am not sure this is a problem related to packaging the controller in a container, and it would probably require more in-depth knowledge about the controller than I have.

tophee commented 1 year ago

OK, thanks a lot for looking into this. I guess I'll just leave it and hope for TP-Link to eventually fix it (if there is anything broken).

If I could ask you a quick question about the permissions of the log files: I noticed that server.log has 644 permissions (on the host) but mongod.log has 600, which made it difficult for me to access. Is there any reason why the permissions are set like this?

Totally off-topic, but if you are curious about ChatGPT, here is my conversation with it, trying to figure out how to get access to mongod.log earlier today. Since all my questions in that conversation are genuine, I cannot evaluate how good a "teacher" ChatGPT was in this case, but in areas where I am more knowledgeable, I have found it interesting to explore the limitations of the bot to better understand how it works, and from that I know that it does make some grave mistakes sometimes. So if you share any similar interest in this new technology, take a look at the conversation. Otherwise, just ignore it. ;-)

mbentley commented 1 year ago

Hmm, I'm not really sure why the logs would have different permissions set, unless it's something about how the permissions are set on the parent directory, maybe. I don't believe anything within the image itself is setting the permissions, so it must be the controller and MongoDB themselves that are setting the permissions upon creation. Oddly enough, mine are 644 for both:

# ls -la | grep -v "log.gz"
total 4888
drwxr-xr-x 2 omada omada       96 Jan 22 00:00 ./
drwxr-x--- 5 omada omada        6 Nov 14 09:06 ../
-rw-r--r-- 1 omada omada 32694061 Jan 22 01:15 mongod.log
-rw-r--r-- 1 omada omada   109493 Jan 22 08:00 server.log
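
If you just need to read the 600-permission file without changing anything, one option (a sketch, assuming the same container name and default log path as above) is to copy it out through Docker; the local copy ends up owned by whoever runs the command:

# copy the log out of the container to a file you can read
docker cp omada-controller:/opt/tplink/EAPController/logs/mongod.log ./mongod.log
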
mbentley commented 11 months ago

Closing as it seems to be an upstream issue; may no longer be relevant. Reopen if there is something I can do.