docker / for-linux

Docker Engine for Linux
https://docs.docker.com/engine/installation/

Docker crashes under a dialup connection/interface/ppp #1281

Open · dElogics opened this issue 3 years ago

dElogics commented 3 years ago

I have a dial-up Internet connection using a USB modem. While that connection is up, Docker fails to start: it logs "stopping healthcheck following graceful shutdown" and then crashes. As soon as the dial-up connection is stopped (and the interface is consequently removed), Docker starts successfully. Interface name: ppp0

If Docker has already started when the dial-up connection comes up, Docker continues to run normally.

Output of docker version:

Client:
 Version:           20.10.0-dev
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        f0df35096d5f5e6b559b42c7fde6c65a2909f7c5
 Built:             Mon Aug  2 18:51:04 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       8728dd246c
  Built:            Mon Aug  2 21:49:28 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.8
  GitCommit:        7eba5930496d9bbe375fdf71603e610ad737d2b2
 runc:
  Version:          1.0.0
  GitCommit:        84113eef6fc27af1b01b3181f31bbaf708715301

Output of docker info:

Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 20.10.7
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7eba5930496d9bbe375fdf71603e610ad737d2b2
 runc version: 84113eef6fc27af1b01b3181f31bbaf708715301
 init version: N/A (expected: de40ad007797e0dcd8b7126f27bb87401d224240)
 Security Options:
  cgroupns
 Kernel Version: 5.10.55-gentoo-gentoo
 Operating System: Gentoo/Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 15.56GiB
 Name: desktopminer
 ID: HAWT:A6KY:5F5S:K5ZV:W4NZ:AC6J:S22P:LKEQ:CFTF:6PVZ:6PKN:5F4N
 Docker Root Dir: /home/de/small/docker
 Debug Mode: true
  File Descriptors: 24
  Goroutines: 40
  System Time: 2021-08-08T16:26:39.157831828+05:30
  EventsListeners: 0
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No cpuset support
WARNING: No io.weight support
WARNING: No io.weight (per device) support
WARNING: No io.max (rbps) support
WARNING: No io.max (wbps) support
WARNING: No io.max (riops) support
WARNING: No io.max (wiops) support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

This is a desktop machine.

/usr/bin/dockerd --debug --storage-driver=overlay2 --iptables=false --data-root=/docker
INFO[2021-08-08T16:10:38.385999460+00:00] Starting up                                  
DEBU[2021-08-08T16:10:38.386377056+00:00] Listener created for HTTP on unix (/var/run/docker.sock) 
DEBU[2021-08-08T16:10:38.386391837+00:00] Containerd not running, starting daemon managed containerd 
INFO[2021-08-08T16:10:38.386839258+00:00] libcontainerd: started new containerd process  pid=1863
INFO[2021-08-08T16:10:38.386885397+00:00] parsed scheme: "unix"                         module=grpc
INFO[2021-08-08T16:10:38.386898736+00:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-08-08T16:10:38.386933198+00:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-08-08T16:10:38.386944379+00:00] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-08-08T16:10:38.394988056+00:00] starting containerd                           revision=7eba5930496d9bbe375fdf71603e610ad737d2b2 version=1.4.8
INFO[2021-08-08T16:10:38.415342364+00:00] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2021-08-08T16:10:38.415416020+00:00] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
INFO[2021-08-08T16:10:38.416482478+00:00] skip loading plugin "io.containerd.snapshotter.v1.aufs"...  error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.55-gentoo-gentoo\\n\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-08-08T16:10:38.416525043+00:00] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2021-08-08T16:10:38.416548407+00:00] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2021-08-08T16:10:38.416689676+00:00] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2021-08-08T16:10:38.416845908+00:00] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  error="path /docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-08-08T16:10:38.416862924+00:00] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
INFO[2021-08-08T16:10:38.416888050+00:00] metadata content store policy set             policy=shared
INFO[2021-08-08T16:10:38.416962328+00:00] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2021-08-08T16:10:38.416998449+00:00] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2021-08-08T16:10:38.417027742+00:00] loading plugin "io.containerd.service.v1.introspection-service"...  type=io.containerd.service.v1
INFO[2021-08-08T16:10:38.417051956+00:00] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2021-08-08T16:10:38.417065103+00:00] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2021-08-08T16:10:38.417077419+00:00] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2021-08-08T16:10:38.417094463+00:00] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2021-08-08T16:10:38.417109323+00:00] loading plugin "io.containerd.service.v1.leases-service"...  type=io.containerd.service.v1
INFO[2021-08-08T16:10:38.417129758+00:00] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2021-08-08T16:10:38.417143647+00:00] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2021-08-08T16:10:38.417156095+00:00] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2021-08-08T16:10:38.417251260+00:00] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
INFO[2021-08-08T16:10:38.417328899+00:00] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2021-08-08T16:10:38.417608062+00:00] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
INFO[2021-08-08T16:10:38.417642765+00:00] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2021-08-08T16:10:38.417697065+00:00] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.417748802+00:00] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.417771113+00:00] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.417785355+00:00] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.417797700+00:00] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.417810045+00:00] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.417821767+00:00] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.417833791+00:00] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.417845210+00:00] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2021-08-08T16:10:38.417879196+00:00] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.417922394+00:00] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.417935843+00:00] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.417947431+00:00] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2021-08-08T16:10:38.418107450+00:00] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2021-08-08T16:10:38.418143292+00:00] serving...                                    address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2021-08-08T16:10:38.418178010+00:00] serving...                                    address=/var/run/docker/containerd/containerd.sock
DEBU[2021-08-08T16:10:38.418192806+00:00] sd notification                               error="<nil>" notified=false state="READY=1"
INFO[2021-08-08T16:10:38.418204062+00:00] containerd successfully booted in 0.023703s  
DEBU[2021-08-08T16:10:38.428199098+00:00] Created containerd monitoring client          address=/var/run/docker/containerd/containerd.sock
DEBU[2021-08-08T16:10:38.428945001+00:00] Started daemon managed containerd            
DEBU[2021-08-08T16:10:38.429327235+00:00] Golang's threads limit set to 114570         
INFO[2021-08-08T16:10:38.429499287+00:00] parsed scheme: "unix"                         module=grpc
INFO[2021-08-08T16:10:38.429512766+00:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-08-08T16:10:38.429527637+00:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-08-08T16:10:38.429553261+00:00] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2021-08-08T16:10:38.429598079+00:00] metrics API listening on /var/run/docker/metrics.sock 
INFO[2021-08-08T16:10:38.430120864+00:00] parsed scheme: "unix"                         module=grpc
INFO[2021-08-08T16:10:38.430133079+00:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-08-08T16:10:38.430145273+00:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-08-08T16:10:38.430157323+00:00] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2021-08-08T16:10:38.430530028+00:00] Using default logging driver json-file       
DEBU[2021-08-08T16:10:38.430541207+00:00] [graphdriver] trying provided driver: overlay2 
DEBU[2021-08-08T16:10:38.430603059+00:00] processing event stream                       module=libcontainerd namespace=plugins.moby
DEBU[2021-08-08T16:10:38.461021853+00:00] backingFs=extfs, projectQuotaSupported=false, indexOff="index=off,", userxattr=""  storage-driver=overlay2
DEBU[2021-08-08T16:10:38.461076516+00:00] Initialized graph driver overlay2            
DEBU[2021-08-08T16:10:38.462321337+00:00] No quota support for local volumes in /docker/volumes: Filesystem does not support, or has not enabled quotas 
DEBU[2021-08-08T16:10:38.549175241+00:00] garbage collected                             d=31.097887ms
WARN[2021-08-08T16:10:38.566458650+00:00] Unable to find io controller                 
WARN[2021-08-08T16:10:38.566479424+00:00] Unable to find cpuset controller             
WARN[2021-08-08T16:10:38.566485987+00:00] Unable to find pids controller               
DEBU[2021-08-08T16:10:38.566651999+00:00] Max Concurrent Downloads: 3                  
DEBU[2021-08-08T16:10:38.566661716+00:00] Max Concurrent Uploads: 5                    
DEBU[2021-08-08T16:10:38.566677017+00:00] Max Download Attempts: 5                     
INFO[2021-08-08T16:10:38.566690337+00:00] Loading containers: start.                   
DEBU[2021-08-08T16:10:38.566730247+00:00] Option Experimental: false                   
DEBU[2021-08-08T16:10:38.566758298+00:00] Option DefaultDriver: bridge                 
DEBU[2021-08-08T16:10:38.566764537+00:00] Option DefaultNetwork: bridge                
DEBU[2021-08-08T16:10:38.566770614+00:00] Network Control Plane MTU: 1500              
DEBU[2021-08-08T16:10:38.566874502+00:00] processing event stream                       module=libcontainerd namespace=moby
WARN[2021-08-08T16:10:38.567741078+00:00] Could not load necessary modules for IPSEC rules: protocol not supported 
INFO[2021-08-08T16:10:38.569486777+00:00] stopping healthcheck following graceful shutdown  module=libcontainerd
INFO[2021-08-08T16:10:38.569515062+00:00] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=moby
INFO[2021-08-08T16:10:38.569529492+00:00] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
DEBU[2021-08-08T16:10:38.569570702+00:00] received signal                               signal=terminated
DEBU[2021-08-08T16:10:38.569630812+00:00] sd notification                               error="<nil>" notified=false state="STOPPING=1"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x5594c73b8d7a]

goroutine 1 [running]:
github.com/docker/docker/vendor/github.com/vishvananda/netlink.parseAddr(0xc000218384, 0x38, 0x38, 0x0, 0xc000a95778, 0x4, 0x280, 0x0, 0x0, 0x0, ...)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/vishvananda/netlink/addr_linux.go:274 +0x21a
github.com/docker/docker/vendor/github.com/vishvananda/netlink.(*Handle).AddrList(0xc000a9ac60, 0x5594c8d03bb0, 0xc0003f17a0, 0x2, 0x0, 0x0, 0x0, 0xc0000109d0, 0x0)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/vishvananda/netlink/addr_linux.go:199 +0x226
github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.(*bridgeInterface).addresses(0xc000ac2060, 0x0, 0x0, 0xa30000000000008, 0x29, 0xc0009c9c50, 0x203000, 0x8, 0xc000ac2120)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/interface.go:57 +0x52
github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.setupBridgeIPv4(0xc000ab68f0, 0xc000ac2060, 0x0, 0x0)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/setup_ipv4.go:31 +0xc5
github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.(*bridgeSetup).apply(0xc000a6c708, 0xc000a9ad10, 0x2)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/setup.go:17 +0x7c
github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.(*driver).createNetwork(0xc00037d580, 0xc000ab68f0, 0x0, 0x0)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/bridge.go:809 +0x7e6
github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.(*driver).populateNetworks(0xc00037d580, 0x5, 0x5594c8340f92)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/bridge_store.go:62 +0x29e
github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.(*driver).initStore(0xc00037d580, 0xc000aab830, 0x0, 0x5594c8796040)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/bridge_store.go:35 +0x226
github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.(*driver).configure(0xc00037d580, 0xc000aab830, 0x5594c88e1ce0, 0xc000aab4a0)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/bridge.go:439 +0x24b
github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.Init(0x5594c8d028c8, 0xc0006b94c0, 0xc000aab830, 0xc000aab830, 0x0)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/bridge.go:169 +0xa5
github.com/docker/docker/vendor/github.com/docker/libnetwork/drvregistry.(*DrvRegistry).AddDriver(...)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drvregistry/drvregistry.go:72
github.com/docker/docker/vendor/github.com/docker/libnetwork.New(0xc00037d500, 0x9, 0x10, 0xc0004d9890, 0xc000aab440, 0xc00037d500, 0x9)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/controller.go:221 +0x5d1
github.com/docker/docker/daemon.(*Daemon).initNetworkController(0xc00000c1e0, 0xc0001ccb00, 0xc000aab440, 0x0, 0x0, 0x0, 0x0)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/daemon/daemon_unix.go:855 +0xac
github.com/docker/docker/daemon.(*Daemon).restore(0xc00000c1e0, 0xc0006b8480, 0xc00078c1c0)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/daemon/daemon.go:490 +0x52c
github.com/docker/docker/daemon.NewDaemon(0x5594c8d25a90, 0xc0006b8480, 0xc0001ccb00, 0xc0004d9890, 0x0, 0x0, 0x0)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/daemon/daemon.go:1147 +0x2c1d
main.(*DaemonCli).start(0xc00075d7a0, 0xc000220de0, 0x0, 0x0)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/cmd/dockerd/daemon.go:195 +0x785
main.runDaemon(...)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/cmd/dockerd/docker_unix.go:13
main.newDaemonCommand.func1(0xc000142b00, 0xc000aed2c0, 0x0, 0x4, 0x0, 0x0)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/cmd/dockerd/docker.go:34 +0x7d
github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).execute(0xc000142b00, 0xc00004e0b0, 0x4, 0x4, 0xc000142b00, 0xc00004e0b0)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:850 +0x472
github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000142b00, 0x0, 0x0, 0x10)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:958 +0x375
github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).Execute(...)
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:895
main.main()
        /tmp/portage/app-emulation/docker-20.10.7/work/docker-20.10.7/src/github.com/docker/docker/cmd/dockerd/docker.go:97 +0x185
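
The trace ends in netlink.parseAddr, reached via AddrList, which the bridge driver calls while restoring the docker0 network. The faulting path can likely be exercised outside dockerd; a minimal sketch, assuming it is built against the affected vendored vishvananda/netlink version (a fixed release should simply print the addresses instead of panicking):

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Look up the point-to-point interface created by pppd.
	link, err := netlink.LinkByName("ppp0")
	if err != nil {
		log.Fatalf("ppp0 not found (is the dial-up connection up?): %v", err)
	}

	// Same call shape as the bridge driver in the trace above
	// (Handle.AddrList with family 0x2, i.e. FAMILY_V4). Parsing the
	// resulting address dump is where the affected vendored netlink
	// panics with a nil pointer dereference.
	addrs, err := netlink.AddrList(link, netlink.FAMILY_V4)
	if err != nil {
		log.Fatalf("AddrList: %v", err)
	}
	for _, a := range addrs {
		fmt.Println(a.String())
	}
}

AddrList dumps addresses for every interface and filters by link afterwards, so the ppp0 address message is parsed even when only the bridge is being queried -- which would explain why the daemon crashes merely because the link exists.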
delogicsreal commented 3 years ago

And yes -- it works when the connection is PPPoE (DSL).

cpuguy83 commented 3 years ago

It looks like this panic is already fixed in the netlink library; we just need to update the vendored copy.
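
Until an engine build with the updated library is available, the report above suggests a sequencing workaround: dockerd survives the ppp link coming up after the daemon has started, so bringing the dial-up connection up only once Docker is running avoids the crash. A hypothetical systemd ordering drop-in, assuming the link is driven by a unit named pppd-dialup.service (adjust the name to your setup; not applicable if NetworkManager or OpenRC manages the link):

# /etc/systemd/system/pppd-dialup.service.d/after-docker.conf
# Hypothetical unit name -- adjust to whatever brings up ppp0.
# dockerd only panics when ppp0 already exists at startup; once the
# daemon is running, the link coming up is harmless (see the report).
[Unit]
Wants=docker.service
After=docker.service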

LittleFox94 commented 2 years ago

Same crash when connected to WWAN with a ppp0 interface.

Version info:

```
Server: Docker Engine - Community
 Engine:
  Version:          20.10.10
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.9
  Git commit:       e2f740d
  Built:            Mon Oct 25 07:41:26 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.11
  GitCommit:        5b46e404f6b9f661a205e28d59c982d3634148f8
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
```
dElogics commented 2 years ago

I made some changes to the modem stack (added a kernel driver that enables ModemManager to use the QMI protocol; as a result I now also receive an IPv6 address -- dual stack), and the issue no longer occurs. The interface name has changed to wwan0.

LittleFox94 commented 2 years ago

> I made some changes to the modem stack (added a kernel driver that enables ModemManager to use the QMI protocol; as a result I now also receive an IPv6 address -- dual stack), and the issue no longer occurs. The interface name has changed to wwan0.

Yep, with QMI you do not get a ppp interface, which seems to be the problem somehow.

TheElixZammuto commented 2 years ago

The same problem also happens when an L2TP VPN is initialized on the host using network-manager-l2tp.