Closed kennystrawnmusic closed 1 year ago
- We add a new `frame_buffer_vm: Framebuffer` field to `bootloader::BootConfig`. The bootloader will use this configuration when it detects that it's running inside a virtual machine (as implemented in this PR).

`frame_buffer_vm: Option<FrameBuffer>` would be an even better idea than this. After all, the current implementation already has it as an optional type as is.

Back to the drawing board; going to reconfigure this using that suggestion. Much better, I think.
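The `Option<FrameBuffer>` idea could look roughly like the sketch below. The types here are simplified, hypothetical stand-ins for the crate's config structs (the real `FrameBuffer` carries more fields), not the actual API:

```rust
// Hypothetical, simplified stand-ins for the bootloader config types
// discussed in this thread; the real structs have more fields.
#[derive(Clone, Copy, PartialEq, Debug)]
struct FrameBuffer {
    minimum_framebuffer_width: Option<u64>,
    minimum_framebuffer_height: Option<u64>,
}

struct BootConfig {
    frame_buffer: FrameBuffer,
    /// `None` means "no VM-specific override: reuse `frame_buffer`".
    frame_buffer_vm: Option<FrameBuffer>,
}

impl BootConfig {
    /// Pick the effective framebuffer config: the VM-specific override when
    /// a hypervisor was detected and an override is set, else the default.
    fn effective_frame_buffer(&self, in_vm: bool) -> FrameBuffer {
        if in_vm {
            self.frame_buffer_vm.unwrap_or(self.frame_buffer)
        } else {
            self.frame_buffer
        }
    }
}
```

Making the field an `Option` keeps the non-VM path untouched: users who never set `frame_buffer_vm` get identical behavior inside and outside a VM.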
Alright, submitted newer commits that address both of your concerns:

- `HypervisorInfo::identify()` to single out Xen as a special case deserving of treatment like physical hardware

Thanks a lot for this PR!
I agree that the current framebuffer configuration is missing some crucial outputs. We currently choose the last mode that matches the `min_width` and `min_height` requirements, but I don't think that UEFI guarantees any particular ordering of GOP modes. So it depends on the machine's firmware whether we end up with the smallest or largest mode.

So I think that we're missing at least the following config options:

1. Maximum bounds for the resolution.
   - I imagine that this would be useful when e.g. a 1080p monitor is connected to a 4k-capable GPU?
2. A way to specify whether we prefer the highest or lowest resolution within the min/max bounds.
3. A way to special-case the framebuffer configuration when using QEMU or other virtual machines.

This PR seems to partially address points 2 and 3, but it doesn't allow configuring the new behavior. I think I would prefer a less-opinionated approach that allows users to configure the behavior themselves. So how about the following:

- We add new `maximum_framebuffer_height` and `maximum_framebuffer_width` fields to `bootloader_boot_config::FrameBuffer`.
- We add a new field to control whether to prefer the highest or lowest available resolution in the given min/max range. For example, this could be a `prefer_resolution` field with `Highest` and `Lowest` as options, or a boolean flag named e.g. `prefer_lowest_resolution`.
- We add a new `frame_buffer_vm: Framebuffer` field to `bootloader::BootConfig`. The bootloader will use this configuration when it detects that it's running inside a virtual machine (as implemented in this PR).
- We set reasonable defaults for the new fields. For example: `frame_buffer_vm.prefer_lowest_resolution = true`, plus some minimum bounds to avoid a tiny window; `frame_buffer.prefer_lowest_resolution = false` with no maximum bounds.
- We update both the BIOS and UEFI implementations to respect the configuration.

This way, we get a similar behavior by default, but users can adjust the behavior as they like. What do you think?
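The proposed min/max bounds plus a `prefer_lowest_resolution` flag can be sketched as below. Field and function names follow the suggestions in this thread but are otherwise hypothetical, not the crate's actual API:

```rust
// Sketch of the proposed config fields; names follow the discussion above
// and are illustrative only.
#[derive(Clone, Copy)]
struct FrameBuffer {
    minimum_framebuffer_width: Option<u64>,
    minimum_framebuffer_height: Option<u64>,
    maximum_framebuffer_width: Option<u64>,
    maximum_framebuffer_height: Option<u64>,
    prefer_lowest_resolution: bool,
}

/// Pick a `(width, height)` mode: filter by the min/max bounds, then take the
/// smallest or largest area depending on `prefer_lowest_resolution`. Since
/// GOP mode ordering is firmware-defined, we deliberately never rely on the
/// iteration order itself.
fn pick_mode(cfg: &FrameBuffer, modes: impl Iterator<Item = (u64, u64)>) -> Option<(u64, u64)> {
    let in_bounds = |v: u64, min: Option<u64>, max: Option<u64>| {
        min.map_or(true, |m| v >= m) && max.map_or(true, |m| v <= m)
    };
    let candidates = modes.filter(|&(w, h)| {
        in_bounds(w, cfg.minimum_framebuffer_width, cfg.maximum_framebuffer_width)
            && in_bounds(h, cfg.minimum_framebuffer_height, cfg.maximum_framebuffer_height)
    });
    if cfg.prefer_lowest_resolution {
        candidates.min_by_key(|&(w, h)| w * h)
    } else {
        candidates.max_by_key(|&(w, h)| w * h)
    }
}
```

With `None` bounds and `prefer_lowest_resolution: false` this degenerates to "largest matching mode", which makes the deterministic choice independent of whatever order the firmware enumerates GOP modes in.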
Changed the title of this pull request to reflect these suggestions
@kennystrawnmusic what exactly happened?
I see you added new commits yesterday, then closed this PR with no context? (and my comment was entirely ignored too)
Looks like some of the PR was moved to a new PR: https://github.com/rust-osdev/bootloader/pull/394
The other PR was on a completely different branch that doesn't have these changes (only the ones that add in the `SystemTable<Runtime>` address while leaving the config structures unchanged), due to it being of a higher priority. Will reopen this one when I have the time to fix these checks (the eventual intention is to reverse them, i.e. to only check for QEMU and nothing else when determining what mode to use).
I'd still like to discourage enforcing such behaviour on QEMU.

If the firmware is deployed to a VM, that VM may be a production system that should utilize the full display. One would not expect the resolution to be reduced whenever a hypervisor is detected just as a development convenience; this behavior should be opt-in / opt-out.
It may look nice to have a slightly lower resolution when virtualizing the OS you're building, but during real hardware tests it makes far more sense to choose a higher-resolution mode than during QEMU tests. Otherwise, you're left with two options:
So, creating this pull request to fix the problem. Using `raw_cpuid` makes it possible to check whether or not you're running inside a hypervisor before setting a display mode, which is important for improving this, and using `.max()` on the mode iterator when real hardware is detected ensures that the absolute highest resolution possible is used in that case.
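The detection-then-select logic described above might look like the sketch below. The `Hypervisor` enum here is a local, illustrative stand-in for the classification `raw_cpuid`'s `HypervisorInfo::identify()` provides (in the bootloader, the value would come from the CPUID hypervisor leaf); the selection strategy mirrors the discussion, not the merged code:

```rust
// Illustrative stand-in for the hypervisor classification obtained from
// CPUID; only the Xen/other distinction from this thread is modeled.
enum Hypervisor {
    Xen,
    Other,
}

/// Decide whether to treat the machine like physical hardware when picking a
/// GOP mode: bare metal and (per this thread) Xen get the highest resolution;
/// other hypervisors such as QEMU get a lower one.
fn use_highest_resolution(detected: Option<Hypervisor>) -> bool {
    match detected {
        None => true,                  // no hypervisor detected: real hardware
        Some(Hypervisor::Xen) => true, // Xen special-cased like real hardware
        Some(_) => false,
    }
}

/// `.max()`-style selection over mode areas on real hardware, lowest
/// resolution otherwise.
fn pick_resolution(modes: &[(u64, u64)], detected: Option<Hypervisor>) -> Option<(u64, u64)> {
    if use_highest_resolution(detected) {
        modes.iter().copied().max_by_key(|&(w, h)| w * h)
    } else {
        modes.iter().copied().min_by_key(|&(w, h)| w * h)
    }
}
```

Keeping the detection result an `Option` cleanly separates "no hypervisor leaf at all" (bare metal) from "hypervisor present but unrecognized".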