andrewdavidwong opened this issue 1 year ago
Still happening in Qubes R4.2 with Whonix 17?
For me, it did not happen in Qubes R4.1 with Whonix 16, but it started happening in Qubes R4.2 with Whonix 17 after an in-place system upgrade. I update templates from the UI; the first attempt fails with this issue, and the second attempt succeeds.
I am not sure this issue can make progress as is.
The way it is written, it might create the impression that this is primarily a Whonix issue ("a Whonix issue; they should investigate and fix this"). But...
Similar Could not connect to 127.0.0.1 errors occasionally also happen with non-Whonix Debian templates, but with nowhere near the same frequency. It's observed with plain Debian maybe 10-20% of the time and with Whonix maybe 90% of the time.
As long as this is an issue with Debian, it needs to be investigated and fixed by Qubes first, because any issue that happens in Debian could also happen in Whonix, just more often for some reason such as a race condition. There might be nothing for Whonix to fix.
Therefore, it might be better to split this into 1) a Qubes Debian template issue and 2) a Qubes-Whonix issue?
On https://www.whonix.org/wiki/Qubes/UpdatesProxy I am collecting all information on how to debug Qubes UpdatesProxy.
New chapter added just now: Qubes UpdatesProxy running test
The following command is useful to run before attempting an upgrade, and also after downloading upgrades has failed. It checks whether qubes-updates-proxy-forwarder.socket is even running. In Template:
sudo systemctl --no-pager -l status qubes-updates-proxy-forwarder.socket
Another new chapter added just now: Qubes UpdatesProxy reachability check
Another command useful to run before and after an upgrade attempt. In Template:
Could you please run the following command:
curl 127.0.0.1:8082
Then please check whether it shows the expected output described at the link above.
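For repeated checks, the same comparison can be scripted. The sketch below is a hypothetical helper (not part of Qubes or Whonix tooling): it fetches the proxy's landing page and looks for the "tinyproxy" marker that appears in the expected output. Note that the proxy answers direct requests with "403 Filtered", which still proves reachability.

```python
import urllib.request
import urllib.error


def proxy_reachable(url="http://127.0.0.1:8082/", timeout=5):
    """Return True if the Qubes updates proxy answers on the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
    except urllib.error.HTTPError as e:
        # A "403 Filtered" answer still proves the proxy is reachable.
        body = e.read()
    except OSError:
        # Connection refused, no route to host, timeout, ...
        return False
    return b"tinyproxy" in body
```

Run inside the template; a False result corresponds to the failing curl check discussed in this ticket.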
In Whonix (or Kicksecure), could you also run the following in the Template:
systemcheck
Or:
systemcheck --verbose
since it also checks for some Qubes UpdatesProxy issues.
All of this debug output is also useful in a Debian Template (except that it does not have systemcheck).
Please confirm you have the exact same error message: Could not connect to 127.0.0.1:8082 (127.0.0.1). - connect (113: No route to host). @vlad-timofeev For other issues, please use different tickets if none exist already for those error messages.
- Similar Could not connect to 127.0.0.1 errors occasionally also happen with non-Whonix Debian templates,
Are they configured to use sys-whonix for updates too? If not, what do they use as an updates proxy?
I have done some testing, and below are the results, plus more detailed information about my setup and the observed behavior, which is admittedly different from the one originally reported.
This is the error log:
Updating whonix-gateway-17
Fail to fetch InRelease: tor+https://deb.whonix.org bookworm InRelease from tor+https://deb.whonix.org/dists/bookworm/InRelease
Fail to fetch InRelease: https://deb.qubes-os.org/r4.2/vm bookworm InRelease from https://deb.qubes-os.org/r4.2/vm/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://fasttrack.debian.net/debian bookworm-fasttrack InRelease from tor+https://fasttrack.debian.net/debian/dists/bookworm-fasttrack/InRelease
Fail to fetch InRelease: tor+https://deb.kicksecure.com bookworm InRelease from tor+https://deb.kicksecure.com/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm InRelease from tor+https://deb.debian.org/debian/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm-updates InRelease from tor+https://deb.debian.org/debian/dists/bookworm-updates/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian-security bookworm-security InRelease from tor+https://deb.debian.org/debian-security/dists/bookworm-security/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm-backports InRelease from tor+https://deb.debian.org/debian/dists/bookworm-backports/InRelease
Fail to fetch InRelease: tor+https://deb.whonix.org bookworm InRelease from tor+https://deb.whonix.org/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://fasttrack.debian.net/debian bookworm-fasttrack InRelease from tor+https://fasttrack.debian.net/debian/dists/bookworm-fasttrack/InRelease
Fail to fetch InRelease: tor+https://deb.kicksecure.com bookworm InRelease from tor+https://deb.kicksecure.com/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm InRelease from tor+https://deb.debian.org/debian/dists/bookworm/InRelease
Fail to fetch InRelease: https://deb.qubes-os.org/r4.2/vm bookworm InRelease from https://deb.qubes-os.org/r4.2/vm/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm-updates InRelease from tor+https://deb.debian.org/debian/dists/bookworm-updates/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian-security bookworm-security InRelease from tor+https://deb.debian.org/debian-security/dists/bookworm-security/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm-backports InRelease from tor+https://deb.debian.org/debian/dists/bookworm-backports/InRelease
Fail to fetch InRelease: tor+https://deb.whonix.org bookworm InRelease from tor+https://deb.whonix.org/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://fasttrack.debian.net/debian bookworm-fasttrack InRelease from tor+https://fasttrack.debian.net/debian/dists/bookworm-fasttrack/InRelease
Fail to fetch InRelease: tor+https://deb.kicksecure.com bookworm InRelease from tor+https://deb.kicksecure.com/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm InRelease from tor+https://deb.debian.org/debian/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm-updates InRelease from tor+https://deb.debian.org/debian/dists/bookworm-updates/InRelease
Fail to fetch InRelease: https://deb.qubes-os.org/r4.2/vm bookworm InRelease from https://deb.qubes-os.org/r4.2/vm/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian-security bookworm-security InRelease from tor+https://deb.debian.org/debian-security/dists/bookworm-security/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm-backports InRelease from tor+https://deb.debian.org/debian/dists/bookworm-backports/InRelease
Fail to fetch InRelease: tor+https://deb.whonix.org bookworm InRelease from tor+https://deb.whonix.org/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://fasttrack.debian.net/debian bookworm-fasttrack InRelease from tor+https://fasttrack.debian.net/debian/dists/bookworm-fasttrack/InRelease
Fail to fetch InRelease: tor+https://deb.kicksecure.com bookworm InRelease from tor+https://deb.kicksecure.com/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm InRelease from tor+https://deb.debian.org/debian/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm-updates InRelease from tor+https://deb.debian.org/debian/dists/bookworm-updates/InRelease
Fail to fetch InRelease: https://deb.qubes-os.org/r4.2/vm bookworm InRelease from https://deb.qubes-os.org/r4.2/vm/dists/bookworm/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian-security bookworm-security InRelease from tor+https://deb.debian.org/debian-security/dists/bookworm-security/InRelease
Fail to fetch InRelease: tor+https://deb.debian.org/debian bookworm-backports InRelease from tor+https://deb.debian.org/debian/dists/bookworm-backports/InRelease
E:Failed to fetch tor+https://deb.debian.org/debian/dists/bookworm/InRelease Could not connect to 127.0.0.1:8082 (127.0.0.1). - connect (113: No route to host), E:Failed to fetch tor+https://deb.debian.org/debian/dists/bookworm-updates/InRelease Unable to connect to 127.0.0.1:8082:, E:Failed to fetch tor+https://deb.debian.org/debian-security/dists/bookworm-security/InRelease Unable to connect to 127.0.0.1:8082:, E:Failed to fetch tor+https://deb.debian.org/debian/dists/bookworm-backports/InRelease Unable to connect to 127.0.0.1:8082:, E:Failed to fetch tor+https://fasttrack.debian.net/debian/dists/bookworm-fasttrack/InRelease Could not connect to 127.0.0.1:8082 (127.0.0.1). - connect (113: No route to host), E:Failed to fetch tor+https://deb.kicksecure.com/dists/bookworm/InRelease Could not connect to 127.0.0.1:8082 (127.0.0.1). - connect (113: No route to host), E:Failed to fetch tor+https://deb.whonix.org/dists/bookworm/InRelease Could not connect to 127.0.0.1:8082 (127.0.0.1). - connect (113: No route to host), E:Failed to fetch https://deb.qubes-os.org/r4.2/vm/dists/bookworm/InRelease Could not connect to 127.0.0.1:8082 (127.0.0.1). - connect (113: No route to host), E:Some index files failed to download. They have been ignored, or old ones used instead.
The error occurs with the whonix-gateway-17 and whonix-workstation-17 templates. Those two templates are configured to use sys-whonix for updates, whereas the non-Whonix templates are configured to use another proxy VM. The failure occurs when opening the Qubes OS Update UI right after boot and checking the two Whonix templates. It does not occur if I start whonix-gateway-17 or whonix-workstation-17 after the boot and before the update attempt.
Below is the output of the suggested commands. Note that, in accordance with the above statements, the update does not fail after I run these commands (because I have to start the template to run them).
[template workstation user ~]% sudo systemctl --no-pager -l status qubes-updates-proxy-forwarder.socket
● qubes-updates-proxy-forwarder.socket - Forward connection to updates proxy over Qubes RPC
Loaded: loaded (/lib/systemd/system/qubes-updates-proxy-forwarder.socket; enabled; preset: enabled)
Active: active (listening) since Fri 2023-12-29 16:32:04 UTC; 30s ago
Listen: 127.0.0.1:8082 (Stream)
Accepted: 1; Connected: 0;
Tasks: 0 (limit: 385)
Memory: 4.0K
CPU: 866us
CGroup: /system.slice/qubes-updates-proxy-forwarder.socket
Dec 29 16:32:04 host systemd[1]: Listening on qubes-updates-proxy-forwarder.socket - Forward connection to updates proxy over Qubes RPC.
[template workstation user ~]% curl 127.0.0.1:8082
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<title>403 Filtered</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<meta name="application-name" content="tor proxy"/>
</head>
<body>
<h1>tor proxy</h1>
This customized file /usr/share/tinyproxy/default.html is stored on on a Tor proxy.
<h1>Filtered</h1>
<p>The request you made has been filtered</p>
<hr />
<p><em>Generated by <a href="https://tinyproxy.github.io/">tinyproxy</a> version 1.11.1.</em></p>
</body>
</html>
[template workstation user ~]% systemcheck --verbose
[INFO] [systemcheck] whonix-workstation-17 | Whonix-Workstation | TemplateVM | Fri Dec 29 04:33:20 PM UTC 2023
[INFO] [systemcheck] Check sudo Result: OK
[INFO] [systemcheck] Whonix build version: 3:10.3-1
[INFO] [systemcheck] whonix-workstation-packages-dependencies-cli: 24.0-1
[INFO] [systemcheck] derivative_major_release_version /etc/whonix_version: 17
[INFO] [systemcheck] Whonix Support Status of this Major Version: Ok.
[WARNING] [systemcheck] Hardened Malloc: Disabled.
[INFO] [systemcheck] Spectre Meltdown Test: skipping since spectre_meltdown_check=false, ok.
[INFO] [systemcheck] Package Manager Consistency Check Result: Output of command dpkg --audit was empty, ok.
[INFO] [systemcheck] systemd journal check Result:
AppArmor:
########################################
########################################
secomp:
########################################
########################################
warnings:
########################################
Dec 29 16:32:03 host kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 29 16:32:08 host qubes.StartApp+xfce4-terminal-dom0[1087]: (xfce4-terminal:1094): dbind-WARNING **: 16:32:08.118: AT-SPI: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files
Dec 29 16:32:13 host qubes.StartApp+xfce4-terminal-dom0[1087]: (xfce4-terminal:1094): Gdk-WARNING **: 16:32:13.809: ../../../gdk/x11/gdkproperty-x11.c:224 invalid X atom: 292245498
########################################
failed:
########################################
Dec 29 16:32:03 host kernel: tsc: Fast TSC calibration failed
Dec 29 16:32:03 host kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 29 16:32:04 host audit: CONFIG_CHANGE op=set audit_failure=1 old=1 auid=4294967295 ses=4294967295 subj=unconfined res=1
Dec 29 16:32:04 host augenrules[547]: failure 1
Dec 29 16:32:04 host augenrules[547]: failure 1
Dec 29 16:32:04 host augenrules[547]: failure 1
Dec 29 16:32:04 host xl[590]: libxl: error: libxl_utils.c:815:libxl_cpu_bitmap_alloc: failed to retrieve the maximum number of cpus
Dec 29 16:32:04 host xl[590]: libxl: error: libxl_utils.c:815:libxl_cpu_bitmap_alloc: failed to retrieve the maximum number of cpus
Dec 29 16:32:04 host xl[590]: libxl: error: libxl_utils.c:815:libxl_cpu_bitmap_alloc: failed to retrieve the maximum number of cpus
Dec 29 16:32:05 host pulseaudio[739]: XOpenDisplay() failed
Dec 29 16:32:05 host pulseaudio[739]: Failed to load module "module-x11-publish" (argument: ""): initialization failed.
Dec 29 16:32:07 host loginctl[814]: Could not attach device: Failed to open device '/sys/devices/platform/pcspkr/input/*': No such device
Dec 29 16:32:07 host dbus-daemon[948]: [session uid=1000 pid=946] Activated service 'org.freedesktop.systemd1' failed: Process org.freedesktop.systemd1 exited with status 1
Dec 29 16:32:07 host systemctl[1042]: Failed to connect to bus: No medium found
Dec 29 16:32:08 host qubes.StartApp+xfce4-terminal-dom0[1087]: (xfce4-terminal:1094): Gdk-CRITICAL **: 16:32:08.111: gdk_atom_intern: assertion 'atom_name != NULL' failed
Dec 29 16:32:08 host qubes.StartApp+xfce4-terminal-dom0[1087]: Failed to connect to session manager: Failed to connect to the session manager: SESSION_MANAGER environment variable not defined
Dec 29 16:33:21 host sudo[2248]: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/bin/systemctl --no-pager --no-block --no-legend --failed
########################################
errors:
########################################
Dec 29 16:32:03 host kernel: ACPI Error: No handler or method for GPE 00, disabling event (20230628/evgpe-839)
Dec 29 16:32:03 host kernel: ACPI Error: No handler or method for GPE 01, disabling event (20230628/evgpe-839)
Dec 29 16:32:03 host kernel: ACPI Error: No handler or method for GPE 03, disabling event (20230628/evgpe-839)
Dec 29 16:32:03 host kernel: ACPI Error: No handler or method for GPE 04, disabling event (20230628/evgpe-839)
Dec 29 16:32:03 host kernel: ACPI Error: No handler or method for GPE 05, disabling event (20230628/evgpe-839)
Dec 29 16:32:03 host kernel: ACPI Error: No handler or method for GPE 06, disabling event (20230628/evgpe-839)
Dec 29 16:32:03 host kernel: ACPI Error: No handler or method for GPE 07, disabling event (20230628/evgpe-839)
Dec 29 16:32:03 host kernel: hid_bpf: error while preloading HID BPF dispatcher: -22
Dec 29 16:32:03 host kernel: RAS: Correctable Errors collector initialized.
Dec 29 16:32:04 host xl[590]: libxl: error: libxl_utils.c:815:libxl_cpu_bitmap_alloc: failed to retrieve the maximum number of cpus
Dec 29 16:32:04 host xl[590]: libxl: error: libxl_utils.c:815:libxl_cpu_bitmap_alloc: failed to retrieve the maximum number of cpus
Dec 29 16:32:04 host xl[590]: libxl: error: libxl_utils.c:815:libxl_cpu_bitmap_alloc: failed to retrieve the maximum number of cpus
Dec 29 16:32:04 host sdwdate-pre[570]: + gcc /usr/src/sdwdate/sclockadj.c -o /usr/libexec/sdwdate/sclockadj -ldl -D_GNU_SOURCE -Wdate-time -D_FORTIFY_SOURCE=3 -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now
Dec 29 16:32:05 host pulseaudio[739]: Unable to contact D-Bus: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
Dec 29 16:32:05 host pulseaudio[739]: Unable to contact D-Bus: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
Dec 29 16:32:08 host qubes.StartApp+xfce4-terminal-dom0[1087]: (xfce4-terminal:1094): dbind-WARNING **: 16:32:08.118: AT-SPI: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files
########################################
ordering cycle:
########################################
########################################
To see this for yourself...
1. Open a terminal. (dom0 -> Start Menu -> Template: whonix-workstation-17 -> Terminal)
2. Run. sudo apparmor-info --boot
3. Run. sudo journalctl --boot | grep -i syscall=
4. Run. sudo journalctl --boot | grep -i warn
5. Run. sudo journalctl --boot | grep -i fail
6. Run. sudo journalctl --boot | grep -i error
7. Run. sudo journalctl --boot | grep -i "ordering cycle"
If you know what you are doing, feel free to disable this check.
Create a file /etc/systemcheck.d/50_user.conf and add:
systemcheck_skip_functions+=" check_journal "
[INFO] [systemcheck] Qubes Settings Test Result: Ok, qubes_vm_type is TemplateVM.
[INFO] [systemcheck] Check Kernel Messages Test Result: Found nothing remarkable, ok.
[INFO] [systemcheck] Whonix firewall systemd unit check Result: Ok.
[INFO] [systemcheck] Check Package Manager Running Result: None running, ok.
[INFO] [systemcheck] Tor Check Result: Skipped, because running in TemplateVM, ok.
[INFO] [systemcheck] Tor Config Check Result: Tor config ok.
[INFO] [systemcheck] Tor Running Check Result: Not running on Whonix-Gateway, ok.
[INFO] [systemcheck] Whonix Meta Packages Test Result: Meta package qubes-whonix-workstation installed, ok.
[INFO] [systemcheck] Whonix Unwanted Packages Test Result: None found.
[INFO] [systemcheck] Check Initializer Result: /var/lib/initializer-dist/status-files/first_run_initializer.fail does not exist, ok.
[INFO] [systemcheck] Check Virtualizer Result: Supported Virtualizer Qubes detected, continuing.
systemd-detect-virt result: qubes
[INFO] [systemcheck] PVClock Result: /sys/devices/system/clocksource/clocksource0/current_clocksource exist, is tsc.
[INFO] [systemcheck] Check Timezone Result: /etc/timezone, Etc/UTC matches Etc/UTC, ok.
[INFO] [systemcheck] Check Timezone Result: /usr/share/zoneinfo/Etc/UTC matches /etc/localtime, ok.
[INFO] [systemcheck] IP Forwarding Result: not running on Whonix-Gatway, skipping, ok.
[INFO] [systemcheck] Check Logs Result: /home/user/.msgcollector/msgdispatcher-error.log does not exist, ok.
[INFO] [systemcheck] Check Logs Result: /run/systemcheck/.msgcollector/msgdispatcher-error.log does not exist, ok.
[INFO] [systemcheck] Check Logs Result: /var/lib/systemcheck/.msgcollector/msgdispatcher-error.log does not exist, ok.
[INFO] [systemcheck] Check Logs Result: /home/user/.cache/tb/torbrowser_updater_error.log does not exist, ok.
[INFO] [systemcheck] Check Hostname Result: "hostname --fqdn" output is "host.localdomain", ok.
[INFO] [systemcheck] Check Hostname Result: "hostname" output is "host", ok.
[INFO] [systemcheck] Check Hostname Result: "hostname --ip-address" output is "127.0.0.1", ok.
[INFO] [systemcheck] Check Hostname Result: "hostname --ip-address" output is "localdomain", ok.
[INFO] [systemcheck] Entropy Available Check Result: ok. /proc/sys/kernel/random/entropy_avail: 256
[INFO] [systemcheck] Check nonfree Result: Ok, no nonfree packages found. For more information, see:
https://www.whonix.org/wiki/Avoid_nonfree_software
[INFO] [systemcheck] Whonix APT Repository: Enabled.
When the Whonix team releases BOOKWORM updates,
they will be AUTOMATICALLY installed (when you run apt-get dist-upgrade)
along with updated packages from the Debian team. Please
read https://www.whonix.org/wiki/Trust to understand the risk.
If you want to change this, use:
sudo repository-dist
[INFO] [systemcheck] Qubes UpdatesProxy Status Files Check: Ok.
[INFO] [systemcheck] Qubes UpdatesProxy Reachability Test: Trying to reach local Qubes UpdatesProxy... PROXY_SERVER: http://127.0.0.1:8082/
[INFO] [systemcheck] Qubes UpdatesProxy Reachability Result: Ok, torified update proxy reachable.
[INFO] [systemcheck] Qubes UpdatesProxy Connectivity Test: Skipped, because not using --leak-tests (--show-ip), ok.
[INFO] [systemcheck] check_tor_socks_or_trans_port SocksPort: Skipped, because not using --leak-tests (--show-ip), ok.
[INFO] [systemcheck] check_tor_socks_or_trans_port TransPort: Skipped, because not using --leak-tests (--show-ip), ok.
[INFO] [systemcheck] check_tor_socks_or_trans_port UpdatesProxy: Skipped, because not using --leak-tests (--show-ip), ok.
[INFO] [systemcheck] check_stream_isolation : Skipped, because not using --leak-tests (--show-ip), ok.
[INFO] [systemcheck] Debian Package Update Check: Checking for software updates via apt-get... ( Documentation: https://www.whonix.org/wiki/Update )
Hit:1 https://deb.qubes-os.org/r4.2/vm bookworm InRelease
Hit:2 tor+https://deb.whonix.org bookworm InRelease
Get:3 tor+https://fasttrack.debian.net/debian bookworm-fasttrack InRelease [12.9 kB]
Hit:4 tor+https://deb.debian.org/debian bookworm InRelease
Hit:5 tor+https://deb.debian.org/debian bookworm-updates InRelease
Hit:6 tor+https://deb.debian.org/debian-security bookworm-security InRelease
Hit:7 tor+https://deb.kicksecure.com bookworm InRelease
Hit:8 tor+https://deb.debian.org/debian bookworm-backports InRelease
Fetched 12.9 kB in 4s (2,863 B/s)
Reading package lists... Done
[INFO] [systemcheck] sudo apt-get dist-upgrade --simulate output:
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
[INFO] [systemcheck] Debian Package Update Check Result: No updates found via apt-get.
[INFO] [systemcheck] Warrant Canary Check: Skipping on Whonix-Workstation, ok.
[INFO] [systemcheck] Please donate!
See: https://www.whonix.org/wiki/Donate
[template workstation user ~]%
I tried limiting the number of concurrent template updates to 1 in the UI; in that case, one of the two templates fails (whichever is queued first) and the other succeeds.
Update: after doing more testing, I have realized that the problem only happens if sys-whonix is not set to start on boot. By default, it was set to start automatically on boot, but I had disabled that as a performance optimization. If I re-enable sys-whonix auto-start, there is no update error.
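Given that finding, the workaround is simply to turn autostart back on. In dom0 (autostart is a standard qvm-prefs property):

```shell
# In dom0: let sys-whonix start automatically at boot again,
# so it is already up before any template update begins.
qvm-prefs sys-whonix autostart True

# Verify the setting:
qvm-prefs sys-whonix autostart
```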
Qubes issues:
Starting a Template does not first start its Qubes UpdatesProxy VM. The proxy VM is only started once running APT results in a qrexec call to the UpdatesProxy. This is because an UpdatesProxy is not implemented similarly to a Net qube, where this is not an issue.
Is there a ticket for that already?
@adrelanos what is the method to wait for / ensure that sys-whonix is ready to handle updates proxy connections? A non-Whonix VM can handle updates proxy requests directly after starting, so it isn't an issue there.
@adrelanos what is the method to wait/ensure that sys-whonix is ready to handle updates proxy connections?
As in "when Tor is ready to serve connections?" The short answer is: none.
Unfortunately, there isn't a reliable API for that. The Tor control protocol has status/circuit-established, which can return 1 (yes), but even that doesn't mean a connection will succeed. The only way to test it would be to actually test it, such as by using curl against some clearnet or onion domain(s). But which ones? One would have to avoid single points of failure.
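The "test it by actually testing it" idea can be sketched as follows. This is an assumption-laden illustration, not an existing Whonix API: it tries several endpoints so that no single domain becomes a single point of failure, and reports ready as soon as any of them accepts a connection.

```python
import socket

# Hypothetical endpoint list -- in a real check these would be a mix of
# clearnet and onion targets reached through the proxy, not raw TCP.
DEFAULT_ENDPOINTS = [
    ("deb.debian.org", 443),
    ("deb.whonix.org", 443),
    ("deb.qubes-os.org", 443),
]


def any_reachable(endpoints=DEFAULT_ENDPOINTS, timeout=5.0):
    """Return True if at least one endpoint accepts a TCP connection."""
    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            continue
    return False
```

The design point is the fallback loop: a single unreachable test host then indicates nothing about Tor readiness, only an unlucky pick.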
But is this even the issue here? I think not, because then we would see a different error message. I am maintaining the wiki page with all the different error messages and how to force-reproduce them, for the purpose of being able to pinpoint exactly where these issues happen.
EDIT: Added to the wiki: https://www.whonix.org/wiki/Dev/Tor#Tor_Readiness_to_Serve_Connections_API
If there is no network connection (unplugged network cable or router, or disabled Wi-Fi), the error message would be a different one (for both Debian and Qubes-Whonix):
Invalid response from proxy: HTTP/1.1 500 Unable to connect Server: tinyproxy/1.11.1 Content-Type: text/html Connection: close [IP: 127.0.0.1 8082]
The same error message appears if, in sys-whonix, one runs:
sudo systemctl stop tor
In conclusion, this ticket is neither a network issue nor a Tor issue.
In Template:
sudo systemctl stop qubes-updates-proxy-forwarder.socket
Could not connect to 127.0.0.1:8082 (127.0.0.1). - connect (113: No route to host)
This is the error message in this very ticket.
Or in sys-whonix:
sudo systemctl stop qubes-updates-proxy.service
Reading from proxy failed - read (11: Resource temporarily unavailable) [IP: 127.0.0.1 8082]
We don't have reports for that issue.
So, in conclusion, can we say this is not a sys-whonix issue, and pinpoint it to qubes-updates-proxy-forwarder.socket not running or not working properly?
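The experiments above give three distinct error signatures, each pointing at a different component. As a summary, here is a hypothetical helper (not part of any Qubes tooling) that maps an apt error message to the cause this thread associates with it:

```python
# Error fragments observed in this thread, paired with the component
# whose stopping reproduces the message (per the experiments above).
SIGNATURES = [
    ("connect (113: No route to host)",
     "qubes-updates-proxy-forwarder.socket not running in the template"),
    ("Reading from proxy failed - read (11: Resource temporarily unavailable)",
     "qubes-updates-proxy.service stopped in the proxy VM (e.g. sys-whonix)"),
    ("HTTP/1.1 500 Unable to connect",
     "no upstream network, or Tor stopped in sys-whonix"),
]


def diagnose(apt_error: str) -> str:
    """Map an apt error message to the likely culprit per this thread."""
    for fragment, cause in SIGNATURES:
        if fragment in apt_error:
            return cause
    return "unknown signature; see https://www.whonix.org/wiki/Qubes/UpdatesProxy"
```

For example, the log quoted earlier in this ticket contains "connect (113: No route to host)", which this table attributes to the forwarder socket in the template.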
Starting a Template does not first start its Qubes UpdatesProxy VM. The proxy VM is only started once running APT results in a qrexec call to the UpdatesProxy. This is because an UpdatesProxy is not implemented similarly to a Net qube, where this is not an issue.
Is there a ticket for that already?
Yes, there is: https://github.com/QubesOS/qubes-issues/issues/8718
connect (113: No route to host) means that the kernel refused the call to connect(2). This means that either:
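As background on the errno semantics (general Linux networking, not Qubes-specific): connecting to a closed port on an unfiltered loopback interface fails with ECONNREFUSED (111), not EHOSTUNREACH (113). Seeing 113 on 127.0.0.1 therefore usually points at a packet-filter REJECT rule rather than merely nothing listening. A small demonstration sketch:

```python
import errno
import socket


def probe(host, port, timeout=3.0):
    """Return 0 on a successful connect, otherwise the errno value."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return 0
    except OSError as e:
        return e.errno
    finally:
        s.close()

# With nothing listening and no firewall rule, a loopback connect yields
# errno.ECONNREFUSED (111). errno.EHOSTUNREACH (113) is the value that
# appears in this ticket's error message.
```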
How to file a helpful issue
Qubes OS release
Qubes 4.1.1
Brief summary
When attempting to update Whonix templates, I frequently -- but not always -- encounter:
Steps to reproduce
Note: This is done after updating via qubesctl and restarting the FirewallVM.
Expected behavior
Normal update output.
Actual behavior
Typical example:
Notes:
I have many other Debian templates that update normally (no errors) within the same run. (All using the same sys-whonix to update. All using the same apt-get commands to update.)
Similar Could not connect to 127.0.0.1 errors occasionally also happen with non-Whonix Debian templates, but with nowhere near the same frequency. It's observed with plain Debian maybe 10-20% of the time and with Whonix maybe 90% of the time.