Closed josephlr closed 2 years ago
Also, using an atomic, shared, static type better reflects how the processor uses the GDT. It's almost always read-only, can be shared between cores
I don't see any problems sharing the code/data segments, but sharing the same TSS can lead to problems, right? How would we avoid that? Add multiple TSS entries to the same TSS?
Sharing a TSS is still not possible because the CPU marks the TSS descriptor as busy. Attempting to load a busy TSS descriptor will cause a general protection fault. If we really wanted to share a GDT between cores, we'd have to use multiple TSS descriptors.
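The busy-marking can be mimicked in plain Rust with an `AtomicU64` per descriptor. The sketch below is only an illustration of the mechanism, not the crate's code: the constants follow the Intel SDM's 64-bit TSS descriptor layout, and the function name is made up.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// In a 64-bit TSS descriptor, the 4-bit type field lives in bits 40..44;
// 0x9 means "available 64-bit TSS" and 0xB means "busy". `ltr` atomically
// sets the busy bit, and loading a descriptor that is already busy raises
// a general protection fault.
const TYPE_MASK: u64 = 0xF << 40;
const TSS_AVAILABLE: u64 = 0x9 << 40;
const TSS_BUSY: u64 = 0xB << 40;

/// Emulates the busy-marking that `ltr` performs in hardware: flip the
/// descriptor from available to busy, failing if it is already busy
/// (where a real CPU would raise #GP instead of returning an error).
fn try_mark_busy(entry: &AtomicU64) -> Result<(), &'static str> {
    let old = entry.load(Ordering::Relaxed);
    if old & TYPE_MASK != TSS_AVAILABLE {
        return Err("descriptor already busy: loading it would #GP");
    }
    entry
        .compare_exchange(old, old | TSS_BUSY, Ordering::SeqCst, Ordering::Relaxed)
        .map(|_| ())
        .map_err(|_| "another core marked it busy first")
}

fn main() {
    let entry = AtomicU64::new(TSS_AVAILABLE | 0xFFFF); // limit bits as filler
    assert!(try_mark_busy(&entry).is_ok()); // first load of this descriptor succeeds
    assert!(try_mark_busy(&entry).is_err()); // a second load of the same descriptor faults
}
```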
The code changes look good to me, I just think that we need some additional docs about sharing a GDT between cores (i.e. what is allowed and what not).
So this is roughly what I added in #366. Those additional docs make it clear that `load_tss` can only be called exactly once for each `SegmentSelector`. It also documents that users shouldn't use `tss_segment` to create multiple descriptors pointing to the same TSS.
I think @Freax13's answer mostly explains why it wouldn't be possible (without nasty `unsafe` code) for users to share TSS descriptors between cores. Of course there's nothing stopping them from sharing the same TSS (through multiple descriptors) between cores. While this is silly, as the TSS usually contains stack pointers which "should" be per-core, the processor lets you do it and it's not inherently unsound. So I think our documentation is enough there.
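To make the "same TSS through multiple descriptors" case concrete, here is a sketch of building two identical descriptors for one TSS base. The helper is hypothetical (it is not the crate's `Descriptor::tss_segment`), but the bit layout follows the Intel SDM.

```rust
// Hypothetical helper: build the low 8 bytes of a 64-bit TSS descriptor
// from a TSS base address, per the Intel SDM layout (limit in bits 0..16,
// base[23:0] in bits 16..40, type 0x9 in bits 40..44, present in bit 47,
// base[31:24] in bits 56..64).
fn tss_descriptor_low(tss_base: u32) -> u64 {
    let base = tss_base as u64;
    let limit = 0x67u64; // size_of::<TaskStateSegment>() - 1
    limit
        | (base & 0x00FF_FFFF) << 16
        | 0x9 << 40 // type: available 64-bit TSS
        | 1 << 47 // present
        | (base >> 24) << 56
}

fn main() {
    // Nothing stops two GDT slots from describing the same TSS: the two
    // descriptors are bit-for-bit identical, and each can be loaded once.
    let shared_tss_base = 0x0012_3450u32;
    let slot_a = tss_descriptor_low(shared_tss_base);
    let slot_b = tss_descriptor_low(shared_tss_base);
    assert_eq!(slot_a, slot_b);
}
```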
Thanks a lot for clarifying! I missed #366, looks very good!
Depends on #380
In #322 it was pointed out that calling `load_tss` would modify the GDT entry for the TSS (to mark it as busy). This meant our GDT required some sort of interior mutability if we wanted to use it in a `static` context. #323 solved this problem by requiring `&mut` to call `load` and `load_unsafe`, thus ensuring that the GDT would be mutable after it was loaded. It also introduced `SingleUseCell` to make such manipulations more ergonomic.

This PR takes a different approach. The idea is basically the initial version of #323 using `AtomicU64`. The main differences are that:

- it uses the `Entry` type from #380 to make the code cleaner;
- it only needs `AtomicU64` if it's possible for code to call `load` and `load_tss`, so we can gate this on `#[cfg(feature = "instructions")]`.
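The feature gate can be sketched as below; `Entry` and its methods are assumed names, not necessarily the crate's exact API. Interior mutability is only needed when `load`/`load_tss` can actually be called, i.e. when the "instructions" feature is enabled.

```rust
// With the feature enabled, the entry needs interior mutability because
// `load_tss` sets the TSS descriptor's busy bit through a shared reference.
#[cfg(feature = "instructions")]
pub struct Entry(std::sync::atomic::AtomicU64);

// Without it, nothing ever writes to a loaded GDT, so a plain u64 suffices.
#[cfg(not(feature = "instructions"))]
pub struct Entry(u64);

impl Entry {
    #[cfg(feature = "instructions")]
    pub const fn new(raw: u64) -> Self {
        Entry(std::sync::atomic::AtomicU64::new(raw))
    }
    #[cfg(feature = "instructions")]
    pub fn raw(&self) -> u64 {
        self.0.load(std::sync::atomic::Ordering::Relaxed)
    }

    #[cfg(not(feature = "instructions"))]
    pub const fn new(raw: u64) -> Self {
        Entry(raw)
    }
    #[cfg(not(feature = "instructions"))]
    pub fn raw(&self) -> u64 {
        self.0
    }
}

fn main() {
    // Either representation exposes the same read-only view of the entry.
    let code_segment = Entry::new(0x00AF_9A00_0000_FFFF);
    assert_eq!(code_segment.raw(), 0x00AF_9A00_0000_FFFF);
}
```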
By doing things this way, we can go back to using shared references for `load` and `load_unsafe`, which is much more ergonomic. We can then get rid of `SingleUseCell` (which doesn't really fit well with this x86_64-specific crate). Also, using an atomic, shared, static type better reflects how the processor uses the GDT: it's almost always read-only, can be shared between cores, and once in use is only updated atomically. In fact, the Intel documentation for `LTR` even says as much.

In the future, we could also use the interior mutability of our GDT structure to add a method
which would allow for a GDT to be a normal `static` item, and eliminate some uses of `lazy_static!`.
Finally, I also added some tests to make sure that `load_tss` changing the TSS entry is actually observable.
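The future interior-mutability method mentioned above might look roughly like the sketch below; every name and signature here is an assumption for illustration, not the crate's actual API.

```rust
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};

// Because the hypothetical `append` takes `&self`, the table can be an
// ordinary `static` with no `lazy_static!` wrapper.
pub struct Gdt {
    entries: [AtomicU64; 8],
    len: AtomicUsize,
}

impl Gdt {
    pub const fn empty() -> Self {
        const NULL: AtomicU64 = AtomicU64::new(0);
        Gdt {
            entries: [NULL; 8],
            len: AtomicUsize::new(1), // index 0 stays the null descriptor
        }
    }

    /// Appends a descriptor through a shared reference and returns its index,
    /// reserving a slot with an atomic counter so concurrent calls are safe.
    pub fn append(&self, descriptor: u64) -> usize {
        let index = self.len.fetch_add(1, Ordering::SeqCst);
        assert!(index < self.entries.len(), "GDT full");
        self.entries[index].store(descriptor, Ordering::SeqCst);
        index
    }
}

// A plain `static`, initialized at compile time.
static GDT: Gdt = Gdt::empty();

fn main() {
    let code = GDT.append(0x00AF_9A00_0000_FFFF); // 64-bit kernel code segment
    assert_eq!(code, 1);
}
```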