QubesOS / qubes-issues

The Qubes OS Project issue tracker
https://www.qubes-os.org/doc/issue-tracking/

Close high bandwidth covert channels #6900

Open 3hhh opened 3 years ago

3hhh commented 3 years ago


The problem you're addressing (if any)

Qubes OS has various high-bandwidth (hundreds of KB/s to MB/s) covert channels between VMs "built in".

One example is qrexec itself, which essentially uses direct memory mapping between VMs via Xen's libvchan to transfer data. This memory mapping is inherently bidirectional and cannot be restricted to a one-way mapping in the current implementation. So an arbitrary qrexec call between two compromised VMs can be used to transfer arbitrary amounts of data between them at memory-mapping speed (ideally > 100 MB/s).

libvchan probably doesn't require much authentication either.

IIRC, XenStore might be another option for a high-bandwidth covert channel.

The solution you'd like

Those covert channels should be removed wherever possible. Ideally, only CPU and memory side channels would remain an issue; those are usually in the range of hundreds of KB/s.

For example, qrexec could offer a one-way channel, so that data could only be transferred in one direction. Whether that is an option of course depends highly on the respective qrexec call.

The value to a user, and who that user might be

Lower vulnerability exposure to covert channel attacks.

Notes

I'm fully aware of the Qubes OS stance that covert channels are not so relevant, as they are often impossible to defeat, and I fully agree. However, one shouldn't use that as an excuse to neglect their existence, but fight back wherever possible. Anyway, this would be more of a long-term, medium-severity enhancement.

Currently Qubes OS is not affected too much by the aforementioned qrexec covert channel, simply because there are not too many inter-VM services. With dom0 minimization that may change in the future.

Also, some users are currently under the impression that their "no network" VM is somewhat equivalent to an air-gapped computer. With such covert channels in existence, that's simply not true (you usually don't get hundreds of MB/s of covert channel over the air without dedicated hardware).

DemiMarie commented 3 years ago

The first task to be done here is restricting grant tables. Right now any two cooperating qubes can share memory with each other, without having to go through dom0. This needs to be fixed before any of the other changes are meaningful at all.

3hhh commented 2 years ago

With regards to qrexec one-way channels:

If I understand it correctly, Qubes OS currently fully uses Xen's libvchan for its qrexec implementation (and libvchan was originally developed by Rafal for Qubes OS). libvchan in turn uses Xen grant tables.

Grant tables fully support unidirectional memory mappings by setting the memory region in the destination domain to read-only (GNTMAP_readonly flag).
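The effect of such a read-only mapping can be mimicked in plain userspace code (a loose analogy only, not Xen code; `GNTMAP_readonly` operates on grant mappings, not files):

```python
# Userspace analogy to GNTMAP_readonly: map a file read-only and
# observe that writes are refused. A Xen grant mapping behaves
# similarly when the granting side sets the readonly flag.
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"granted page")                    # the "shared" content

ro = mmap.mmap(fd, 0, access=mmap.ACCESS_READ)   # destination's read-only view
assert ro[:7] == b"granted"                      # reading works fine

try:
    ro[0:1] = b"X"                               # write attempt...
    writable = True
except TypeError:                                # ...is rejected by the mapping
    writable = False
assert not writable

ro.close()
os.close(fd)
os.unlink(path)
```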

So the bidirectional issue lies in libvchan, which currently uses strictly bidirectional ring buffers/grant tables, as can be seen in its xengntshr_share_page_notify(... writable=1 ...) call here. If libvchan doesn't strictly need that writable flag, unidirectional channels would be rather straightforward to implement as an option to that call. If, however, it does depend on it, e.g. for acknowledgment messages, one would need an alternative to libvchan. Unfortunately my Xen knowledge is too limited to judge that.

marmarek commented 2 years ago

If libvchan doesn't strictly need that writable flag

It does strictly need it: to learn which parts of the ring buffer have already been read by the client, so they can be overwritten with subsequent data.
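The mechanism can be illustrated with a minimal single-process sketch of a vchan-style ring buffer (names are illustrative, not the real libvchan layout): the consumer must write its read counter back into the shared region, which is exactly the write access a read-only mapping would forbid.

```python
# Toy vchan-style ring buffer. The consumer advances `cons` inside
# the shared state so the producer knows which slots are free to
# overwrite -- that consumer-side write is why libvchan needs
# writable mappings on both ends.

SIZE = 8

class Ring:
    def __init__(self):
        self.buf = [None] * SIZE   # shared data slots
        self.prod = 0              # write counter (written by producer)
        self.cons = 0              # read counter (written by CONSUMER)

    def used(self):
        return self.prod - self.cons

    def write(self, item):
        if self.used() == SIZE:
            raise BlockingIOError("ring full: producer must wait for cons to advance")
        self.buf[self.prod % SIZE] = item
        self.prod += 1

    def read(self):
        if self.used() == 0:
            raise BlockingIOError("ring empty")
        item = self.buf[self.cons % SIZE]
        self.cons += 1             # write into shared state; impossible
        return item                # if the consumer's mapping were read-only

r = Ring()
for i in range(SIZE):
    r.write(i)
try:
    r.write(99)                    # full: slot 0 not yet acknowledged
except BlockingIOError:
    pass
assert r.read() == 0               # consumer advances cons...
r.write(99)                        # ...which frees a slot for the producer
```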

3hhh commented 2 years ago

On 11/2/21 6:57 PM, Marek Marczykowski-Górecki wrote:

If libvchan doesn't strictly need that writable flag

It does strictly need it: to learn which parts of the ring buffer have already been read by the client, so they can be overwritten with subsequent data.

Thanks for the clarification!

Unfortunately that means that unidirectional channels may only be possible with a libvchan alternative. :-(

I'd then imagine the following options for such a one-way channel implementation:

3hhh commented 2 years ago

I recently stumbled across Argo, which seems to be pretty much what I want and is already used by OpenXT.

It is unidirectional at the Xen hypervisor level, but unfortunately that design doesn't seem to have carried over into the userspace library, which isn't upstream yet.

It also introduces full dom0 control over shared data, protocol verification, access control, and so on.

Anyway, going by my voice recognition capabilities, @marmarek seems to be well-informed about the topic... ;-)

I was also wondering about Argo's performance, but unfortunately that question by @marmarek wasn't answered. However, the author claims that it's used by the OpenXT GUI VM, i.e. it should be reasonably fast.

In total I guess it's a good candidate for a next major version of qrexec.

EDIT: Some more info on argo: https://openxt.atlassian.net/wiki/spaces/DC/pages/1770422298/HMX+Hypervisor-Mediated+data+eXchange
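The hypervisor-mediated model behind Argo can be sketched as a toy simulation (all names here are hypothetical and not the real Argo API): domains never map each other's memory; the hypervisor copies bytes between private buffers and can enforce a one-way policy.

```python
# Toy model of hypervisor-mediated data exchange (HMX). The
# "hypervisor" is the only party touching both sides' data, so it
# can enforce direction and access control centrally -- unlike a
# shared-memory ring, where both ends can always talk back.

class Hypervisor:
    def __init__(self):
        self.rings = {}        # (dst_domid, port) -> queued messages
        self.policy = set()    # allowed (src_domid, dst_domid) pairs

    def allow(self, src, dst):
        self.policy.add((src, dst))            # one-way by default

    def register_ring(self, domid, port):
        self.rings[(domid, port)] = []

    def sendv(self, src, dst, port, data):
        if (src, dst) not in self.policy:
            raise PermissionError(f"policy forbids {src} -> {dst}")
        # the hypervisor copies the data; src and dst share no memory
        self.rings[(dst, port)].append(bytes(data))

    def recv(self, domid, port):
        return self.rings[(domid, port)].pop(0)

xen = Hypervisor()
xen.register_ring(2, port=7)
xen.allow(1, 2)                        # domain 1 may send to domain 2
xen.sendv(1, 2, 7, b"hello")
assert xen.recv(2, 7) == b"hello"
try:
    xen.sendv(2, 1, 7, b"covert")      # reverse direction is blocked
except PermissionError:
    pass
```

The point of the sketch is the centralized `policy` check: with grant-table shared memory there is no equivalent choke point once the mapping exists.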

3hhh commented 2 years ago

Wrt Argo performance: the OpenXT tests check for > 25 MB/s.

marmarek commented 2 years ago

Wrt Argo performance: the OpenXT tests check for > 25 MB/s.

I hope that's just some sanity check against catastrophic failure, and not really the range of its top performance... With the current qrexec I can easily get > 500 MiB/s.