resiehnnes opened 2 years ago
@resiehnnes ,
could you perform a simple test?
$ sudo service opensnitch stop
$ sudo mv /etc/opensnitchd/opensnitch-procs.o sudo mv /etc/opensnitchd/opensnitch-procs.o.old
$ sudo service opensnitch start
launch some flatpaks and see if the delay is gone.
I'll try to reproduce it locally.
The second command gave me a warning: 'mv: target '/etc/opensnitchd/opensnitch-procs.o.old' is not a directory'. If I understood correctly, I had to rename 'opensnitch-procs.o' to 'opensnitch-procs.o.old', so I did that, but unfortunately the problem with Firefox loading times is still there.
oops, sorry, I typed the command wrong. Yes, you got it right, thank you!
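For reference, the intended sequence (the second command as originally posted accidentally repeated 'sudo mv') would be:

```shell
# Stop the daemon, set the eBPF process-monitor module aside, restart without it.
# Requires root; paths are the ones used earlier in this thread.
sudo service opensnitch stop
sudo mv /etc/opensnitchd/opensnitch-procs.o /etc/opensnitchd/opensnitch-procs.o.old
sudo service opensnitch start
```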
ok, so it's not a problem of the new ebpf module. Did you experience this problem with previous versions v1.5.x?
Yes, I had this problem with 1.5.x as well, so I thought it might have been fixed in 1.6.x and installed it when it hit the rc2 build, but unfortunately the problem is still there. One more thing I noticed: opening links from any application, like a homepage link in an 'About' section, has the same delay (up to a minute) before Firefox catches the requested link and opens it.
ok, thank you for reporting this then :), I'll investigate it
I can't reproduce it. Please follow the next steps so we can keep analyzing this issue (I assume you launch it as flatpak run org.mozilla.firefox, or do you launch it in a different way?).

mmh, there's something suspicious:
[2022-08-04 09:42:40] new connection tcp => 53:84.208.20.110 -> 10.107.114.238:51574 uid: 4294967295
[2022-08-04 09:42:40] ebpf warning: eBPF packet with unknown source IP: 84.208.20.110
This appears repetitively until openweathermap.org is called (1 minute later), maybe it's when the firefox window shows up?
There should be a rule in the inet-filter-input table (udp sport 53 queue flags bypass to 0) to intercept these DNS responses, obtain the resolved domain name, and accept the connection.
Could you list and post the inet-filter-input table rules?
$ sudo nft list chain inet filter input
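A quick way to check whether that interception rule is actually loaded (a sketch; the grep pattern simply matches the rule text quoted above):

```shell
# Dump the whole ruleset and search for opensnitch's DNS-response queue rule.
# Requires root.
sudo nft list ruleset | grep -n 'udp sport 53 queue'
```

If this prints nothing, the rule is missing and DNS responses are not being intercepted.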
On the other hand, there are no log entries connecting to domains, only to IPs:
[2022-08-04 09:45:12] /app/lib/firefox/firefox-bin -> 140.82.121.6:443
Are you using systemd-resolved's DNSSEC / DNSOverTLS feature or firefox's DNS over HTTPS?
This appears repetitively until openweathermap.org is called (1 minute later), maybe it's when the firefox window shows up?
I copied the log file right after Firefox showed up, so it should be the last minute in the log file (around the 9:45 mark)
Could you list and post inet-filter-input table rules?
I am getting an error:
sudo nft list chain inet filter input
Error: No such file or directory
list chain inet filter input
^^^^^
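That nft error usually means there is no chain named "input" inside a table named "inet filter" on this system. A way to find the right names first, using standard nft subcommands:

```shell
# Enumerate the tables that actually exist, then list the relevant one, e.g.:
sudo nft list tables
sudo nft list table inet filter
```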
Are you using systemd-resolved's DNSSEC / DNSOverTLS feature or firefox's DNS over HTTPS?
Indeed, I am using DNS over HTTPS in Firefox. Also, I forgot to mention that I am running a system-wide VPN client 24/7 (Mullvad with the official GUI).
P.S. I removed the uploaded log file over privacy concerns, but if you need it, or a new one, I will gladly provide it.
Ok, I think we are getting somewhere. I closed the VPN client (thus breaking the VPN connection) and, voilà, Firefox opened up without any delays. Then I switched the VPN client back on and re-launched Firefox a couple of times, but this time there's no delay and it opens fine.
I don't know if it is Mullvad causing these issues. I am running the latest version, '2022.2', but '2022.3 beta 3' is out, so who knows; I will try it out and report back.
So updating Mullvad to the latest beta version didn't magically solve the delay issue with Firefox, but after further testing I found something worth mentioning:
- Firefox shows no delay when opened if Mullvad is disconnected from the VPN server and I am connected straight to my ISP (Mullvad's 'Kill switch' is turned off).
- Firefox does have a delay when opened if Mullvad is connected to the VPN server.
- Firefox does have a delay when opened if Mullvad is disconnected from the VPN server, but this time Mullvad's 'Kill switch' is turned on and there's no connection to my ISP.
In every scenario OpenSnitch was enabled and running, and Mullvad's client app was running in the background regardless of its settings.
thank you @resiehnnes for this information, it's really useful. I'll try to reproduce it with the 3rd point, although my Mullvad client doesn't have the kill switch.
Could you post your routing table + nftables ruleset while Mullvad is connected to the VPN? (ip r; nft list ruleset)
No worries, glad to help you identify this problem :) So, here is what I got in terminal:
ip r
default via 192.168.0.1 dev wlp5s0 proto dhcp src 192.168.0.122 metric 600
10.64.0.1 dev wg-mullvad proto static
192.168.0.0/24 dev wlp5s0 proto kernel scope link src 192.168.0.122 metric 600
sudo nft list ruleset
table inet firewalld {
ct helper helper-netbios-ns-udp {
type "netbios-ns" protocol udp
l3proto ip
}
chain mangle_PREROUTING {
type filter hook prerouting priority mangle + 10; policy accept;
jump mangle_PREROUTING_ZONES
}
chain mangle_PREROUTING_POLICIES_pre {
jump mangle_PRE_policy_allow-host-ipv6
}
chain mangle_PREROUTING_ZONES {
iifname "wlp5s0" goto mangle_PRE_FedoraWorkstation
goto mangle_PRE_FedoraWorkstation
}
chain mangle_PREROUTING_POLICIES_post {
}
chain nat_PREROUTING {
type nat hook prerouting priority dstnat + 10; policy accept;
jump nat_PREROUTING_ZONES
}
chain nat_PREROUTING_POLICIES_pre {
jump nat_PRE_policy_allow-host-ipv6
}
chain nat_PREROUTING_ZONES {
iifname "wlp5s0" goto nat_PRE_FedoraWorkstation
goto nat_PRE_FedoraWorkstation
}
chain nat_PREROUTING_POLICIES_post {
}
chain nat_POSTROUTING {
type nat hook postrouting priority srcnat + 10; policy accept;
jump nat_POSTROUTING_ZONES
}
chain nat_POSTROUTING_POLICIES_pre {
}
chain nat_POSTROUTING_ZONES {
oifname "wlp5s0" goto nat_POST_FedoraWorkstation
goto nat_POST_FedoraWorkstation
}
chain nat_POSTROUTING_POLICIES_post {
}
chain filter_PREROUTING {
type filter hook prerouting priority filter + 10; policy accept;
icmpv6 type { nd-router-advert, nd-neighbor-solicit } accept
meta nfproto ipv6 fib saddr . mark . iif oif missing drop
}
chain filter_INPUT {
type filter hook input priority filter + 10; policy accept;
ct state { established, related } accept
ct status dnat accept
iifname "lo" accept
jump filter_INPUT_ZONES
ct state invalid drop
reject with icmpx admin-prohibited
}
chain filter_FORWARD {
type filter hook forward priority filter + 10; policy accept;
ct state { established, related } accept
ct status dnat accept
iifname "lo" accept
ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
jump filter_FORWARD_ZONES
ct state invalid drop
reject with icmpx admin-prohibited
}
chain filter_OUTPUT {
type filter hook output priority filter + 10; policy accept;
ct state { established, related } accept
oifname "lo" accept
ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
jump filter_OUTPUT_POLICIES_pre
jump filter_OUTPUT_POLICIES_post
}
chain filter_INPUT_POLICIES_pre {
jump filter_IN_policy_allow-host-ipv6
}
chain filter_INPUT_ZONES {
iifname "wlp5s0" goto filter_IN_FedoraWorkstation
goto filter_IN_FedoraWorkstation
}
chain filter_INPUT_POLICIES_post {
}
chain filter_FORWARD_POLICIES_pre {
}
chain filter_FORWARD_ZONES {
iifname "wlp5s0" goto filter_FWD_FedoraWorkstation
goto filter_FWD_FedoraWorkstation
}
chain filter_FORWARD_POLICIES_post {
}
chain filter_OUTPUT_POLICIES_pre {
}
chain filter_OUTPUT_POLICIES_post {
}
chain filter_IN_FedoraWorkstation {
jump filter_INPUT_POLICIES_pre
jump filter_IN_FedoraWorkstation_pre
jump filter_IN_FedoraWorkstation_log
jump filter_IN_FedoraWorkstation_deny
jump filter_IN_FedoraWorkstation_allow
jump filter_IN_FedoraWorkstation_post
jump filter_INPUT_POLICIES_post
meta l4proto { icmp, ipv6-icmp } accept
reject with icmpx admin-prohibited
}
chain filter_IN_FedoraWorkstation_pre {
}
chain filter_IN_FedoraWorkstation_log {
}
chain filter_IN_FedoraWorkstation_deny {
}
chain filter_IN_FedoraWorkstation_allow {
ip6 daddr fe80::/64 udp dport 546 ct state { new, untracked } accept
tcp dport 22 ct state { new, untracked } accept
udp dport 137 ct helper set "helper-netbios-ns-udp"
udp dport 137 ct state { new, untracked } accept
udp dport 138 ct state { new, untracked } accept
ip daddr 224.0.0.251 udp dport 5353 ct state { new, untracked } accept
ip6 daddr ff02::fb udp dport 5353 ct state { new, untracked } accept
udp dport 1025-65535 ct state { new, untracked } accept
tcp dport 1025-65535 ct state { new, untracked } accept
}
chain filter_IN_FedoraWorkstation_post {
}
chain nat_POST_FedoraWorkstation {
jump nat_POSTROUTING_POLICIES_pre
jump nat_POST_FedoraWorkstation_pre
jump nat_POST_FedoraWorkstation_log
jump nat_POST_FedoraWorkstation_deny
jump nat_POST_FedoraWorkstation_allow
jump nat_POST_FedoraWorkstation_post
jump nat_POSTROUTING_POLICIES_post
}
chain nat_POST_FedoraWorkstation_pre {
}
chain nat_POST_FedoraWorkstation_log {
}
chain nat_POST_FedoraWorkstation_deny {
}
chain nat_POST_FedoraWorkstation_allow {
}
chain nat_POST_FedoraWorkstation_post {
}
chain filter_FWD_FedoraWorkstation {
jump filter_FORWARD_POLICIES_pre
jump filter_FWD_FedoraWorkstation_pre
jump filter_FWD_FedoraWorkstation_log
jump filter_FWD_FedoraWorkstation_deny
jump filter_FWD_FedoraWorkstation_allow
jump filter_FWD_FedoraWorkstation_post
jump filter_FORWARD_POLICIES_post
reject with icmpx admin-prohibited
}
chain filter_FWD_FedoraWorkstation_pre {
}
chain filter_FWD_FedoraWorkstation_log {
}
chain filter_FWD_FedoraWorkstation_deny {
}
chain filter_FWD_FedoraWorkstation_allow {
}
chain filter_FWD_FedoraWorkstation_post {
}
chain nat_PRE_FedoraWorkstation {
jump nat_PREROUTING_POLICIES_pre
jump nat_PRE_FedoraWorkstation_pre
jump nat_PRE_FedoraWorkstation_log
jump nat_PRE_FedoraWorkstation_deny
jump nat_PRE_FedoraWorkstation_allow
jump nat_PRE_FedoraWorkstation_post
jump nat_PREROUTING_POLICIES_post
}
chain nat_PRE_FedoraWorkstation_pre {
}
chain nat_PRE_FedoraWorkstation_log {
}
chain nat_PRE_FedoraWorkstation_deny {
}
chain nat_PRE_FedoraWorkstation_allow {
}
chain nat_PRE_FedoraWorkstation_post {
}
chain mangle_PRE_FedoraWorkstation {
jump mangle_PREROUTING_POLICIES_pre
jump mangle_PRE_FedoraWorkstation_pre
jump mangle_PRE_FedoraWorkstation_log
jump mangle_PRE_FedoraWorkstation_deny
jump mangle_PRE_FedoraWorkstation_allow
jump mangle_PRE_FedoraWorkstation_post
jump mangle_PREROUTING_POLICIES_post
}
chain mangle_PRE_FedoraWorkstation_pre {
}
chain mangle_PRE_FedoraWorkstation_log {
}
chain mangle_PRE_FedoraWorkstation_deny {
}
chain mangle_PRE_FedoraWorkstation_allow {
}
chain mangle_PRE_FedoraWorkstation_post {
}
chain filter_IN_policy_allow-host-ipv6 {
jump filter_IN_policy_allow-host-ipv6_pre
jump filter_IN_policy_allow-host-ipv6_log
jump filter_IN_policy_allow-host-ipv6_deny
jump filter_IN_policy_allow-host-ipv6_allow
jump filter_IN_policy_allow-host-ipv6_post
}
chain filter_IN_policy_allow-host-ipv6_pre {
}
chain filter_IN_policy_allow-host-ipv6_log {
}
chain filter_IN_policy_allow-host-ipv6_deny {
}
chain filter_IN_policy_allow-host-ipv6_allow {
icmpv6 type nd-neighbor-advert accept
icmpv6 type nd-neighbor-solicit accept
icmpv6 type nd-router-advert accept
icmpv6 type nd-redirect accept
}
chain filter_IN_policy_allow-host-ipv6_post {
}
chain nat_PRE_policy_allow-host-ipv6 {
jump nat_PRE_policy_allow-host-ipv6_pre
jump nat_PRE_policy_allow-host-ipv6_log
jump nat_PRE_policy_allow-host-ipv6_deny
jump nat_PRE_policy_allow-host-ipv6_allow
jump nat_PRE_policy_allow-host-ipv6_post
}
chain nat_PRE_policy_allow-host-ipv6_pre {
}
chain nat_PRE_policy_allow-host-ipv6_log {
}
chain nat_PRE_policy_allow-host-ipv6_deny {
}
chain nat_PRE_policy_allow-host-ipv6_allow {
}
chain nat_PRE_policy_allow-host-ipv6_post {
}
chain mangle_PRE_policy_allow-host-ipv6 {
jump mangle_PRE_policy_allow-host-ipv6_pre
jump mangle_PRE_policy_allow-host-ipv6_log
jump mangle_PRE_policy_allow-host-ipv6_deny
jump mangle_PRE_policy_allow-host-ipv6_allow
jump mangle_PRE_policy_allow-host-ipv6_post
}
chain mangle_PRE_policy_allow-host-ipv6_pre {
}
chain mangle_PRE_policy_allow-host-ipv6_log {
}
chain mangle_PRE_policy_allow-host-ipv6_deny {
}
chain mangle_PRE_policy_allow-host-ipv6_allow {
}
chain mangle_PRE_policy_allow-host-ipv6_post {
}
}
table inet filter {
}
table inet nat {
}
table inet mangle {
chain output {
type filter hook output priority mangle; policy accept;
icmp type { echo-reply, echo-request } accept
icmpv6 type { echo-request, echo-reply } accept
}
}
table inet mullvad {
chain prerouting {
type filter hook prerouting priority -199; policy accept;
iif != "wg-mullvad" ct mark 0x00000f41 meta mark set 0x6d6f6c65
ip saddr 169.150.201.2 udp sport 49333 meta mark set 0x6d6f6c65
}
chain output {
type filter hook output priority filter; policy drop;
oif "lo" accept
ct mark 0x00000f41 accept
udp sport 68 ip daddr 255.255.255.255 udp dport 67 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff02::1:2 udp dport 547 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff05::1:3 udp dport 547 accept
ip6 daddr ff02::2 icmpv6 type nd-router-solicit icmpv6 code no-route accept
ip6 daddr ff02::1:ff00:0/104 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
ip daddr 169.150.201.2 udp dport 49333 meta mark 0x6d6f6c65 accept
oif "wg-mullvad" udp dport 53 ip daddr 10.64.0.1 accept
oif "wg-mullvad" tcp dport 53 ip daddr 10.64.0.1 accept
udp dport 53 reject
tcp dport 53 reject with tcp reset
oif "wg-mullvad" accept
reject
}
chain input {
type filter hook input priority filter; policy drop;
iif "lo" accept
ct mark 0x00000f41 accept
udp sport 67 udp dport 68 accept
ip6 saddr fe80::/10 udp sport 547 ip6 daddr fe80::/10 udp dport 546 accept
ip6 saddr fe80::/10 icmpv6 type nd-router-advert icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-redirect icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
ip saddr 169.150.201.2 udp sport 49333 ct state established accept
iif "wg-mullvad" accept
}
chain forward {
type filter hook forward priority filter; policy drop;
udp sport 68 ip daddr 255.255.255.255 udp dport 67 accept
udp sport 67 udp dport 68 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff02::1:2 udp dport 547 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff05::1:3 udp dport 547 accept
ip6 saddr fe80::/10 udp sport 547 ip6 daddr fe80::/10 udp dport 546 accept
ip6 daddr ff02::2 icmpv6 type nd-router-solicit icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-router-advert icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-redirect icmpv6 code no-route accept
ip6 daddr ff02::1:ff00:0/104 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
oif "wg-mullvad" udp dport 53 ip daddr 10.64.0.1 accept
oif "wg-mullvad" tcp dport 53 ip daddr 10.64.0.1 accept
udp dport 53 reject
tcp dport 53 reject with tcp reset
oif "wg-mullvad" accept
iif "wg-mullvad" ct state established accept
reject
}
}
table ip mullvadmangle4 {
chain mangle {
type route hook output priority mangle; policy accept;
oif "wg-mullvad" udp dport 53 ip daddr 10.64.0.1 accept
oif "wg-mullvad" tcp dport 53 ip daddr 10.64.0.1 accept
meta cgroup 5087041 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
}
chain nat {
type nat hook postrouting priority srcnat; policy accept;
oif "wg-mullvad" ct mark 0x00000f41 drop
oif != "lo" ct mark 0x00000f41 masquerade
}
}
table ip6 mullvadmangle6 {
chain mangle {
type route hook output priority mangle; policy accept;
meta cgroup 5087041 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
}
chain nat {
type nat hook postrouting priority srcnat; policy accept;
oif "wg-mullvad" ct mark 0x00000f41 drop
oif != "lo" ct mark 0x00000f41 masquerade
}
}
hey @resiehnnes !
I think I found a few workarounds for this problem. If you still use opensnitch, try setting DNS= to DNS=1.1.1.1 (for example) in /etc/systemd/resolved.conf, then restart the service ($ sudo systemctl restart systemd-resolved).
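Concretely, the change would look like this (1.1.1.1 is only an example upstream resolver; any reachable one works):

```shell
# /etc/systemd/resolved.conf -- pin an explicit upstream DNS server:
#   [Resolve]
#   DNS=1.1.1.1
sudo systemctl restart systemd-resolved
resolvectl status        # verify which DNS server is now in use
```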
I'll commit the other workaround soon.
Greetings @gustavo-iniguez-goya , I tried your solution together with OpenSnitch 1.6rc3, and I still have the same issue.
I think I'm experiencing a similar issue. Opening a link from Thunderbird (Flatpak) in Firefox (Flatpak) takes up to 30s, which didn't occur before installing the firewall, as far as I know. I am on the latest stable release though. Question: how unstable are the new ones?
I think this problem affects all versions @trytomakeyouprivate . I never managed to debug/reproduce it.
Note that the original issue involved a VPN with other firewall rules, which could interfere with opensnitch.
If you're not using a VPN + flatpaks, open a new issue describing the problem (be sure to verify that without opensnitch there's no delay). Set the log level to DEBUG under Preferences->Nodes, reproduce the problem, then post the log file /var/log/opensnitchd.log
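Once reproduced with DEBUG logging, the relevant lines can be pulled out with something like this (the grep term is just an example for the Firefox case):

```shell
# Show the most recent daemon log lines mentioning firefox:
grep -i firefox /var/log/opensnitchd.log | tail -n 50
```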
hey, it'd be worth testing if this issue keeps happening with latest v1.6.1. There have been a couple of changes that could have helped to solve this problem.
Some applications are taking a long time (1-2 min) to launch. It happens very often with flatpak Firefox (103.0.1). The same problem happens with flatpak Steam when launching a game. If I disable OpenSnitch during those moments when applications refuse to load, they load right away. I have a feeling it has something to do with flatpak applications; I never had these problems before I upgraded to Fedora 36 and started to use Firefox as a flatpak.
Let me know if you need any logs to investigate the problem.