Regarding color spaces, I'm actually already doing brightness -> sRGB conversion in the VIP code (see https://github.com/emu-rs/rustual-boy/blob/master/rustual-boy-core/src/vip/mod.rs#L120, https://github.com/emu-rs/rustual-boy/blob/master/rustual-boy-core/src/vip/mod.rs#L128, https://github.com/emu-rs/rustual-boy/blob/master/rustual-boy-core/src/vip/mod.rs#L1127). However, this is not only visually incorrect (it looks a lot better than not doing it, and it prepares the output for sRGB monitors, but it doesn't correctly model the LED brightness curve, so bugs like the frame glitches between screens in Wario Land [which are correctly emulated and happen on real hardware] are more visible than they should be as the screen fades in/out); it's also probably not the right place to do it now that we're introducing middleware. The interface we should probably have is that the VIP outputs pixels in a linear color space (for which I'd still prefer `u8` over `f32`, as we're not doing anything HDR here really), and, with a more general `Sink` design, we could have a separate sink for converting linear -> sRGB, just like we have for anaglyph -> single-frame conversion (with basically the same sink interface/wrapping).
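For reference, the per-channel linear -> sRGB encode such a sink would apply is just the standard transfer function; a minimal sketch (the scaling/rounding here is illustrative and not taken from the repo):

```rust
/// Standard linear -> sRGB encode for a single 8-bit channel (sketch only).
fn linear_to_srgb(linear: u8) -> u8 {
    let c = linear as f32 / 255.0;
    let encoded = if c <= 0.0031308 {
        12.92 * c
    } else {
        1.055 * c.powf(1.0 / 2.4) - 0.055
    };
    (encoded * 255.0 + 0.5) as u8
}
```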
Another thing is that if we have our own `Color` struct, we should probably make the internal buffer types in the emulator infrastructure use it directly (e.g. `Box<[Color]>` instead of raw `u8`'s or `u32`'s), as it would still be tightly packed but a bit more convenient to work with. It's only when interfacing with stuff at the top level (like minifb) that we specifically need `u32`'s or something, and then we can do a quick rearrange/copy. That would mean moving the `Color` struct into the core, but I think it makes sense to have that semantic knowledge there, especially if we still have the various sink implementations as part of the middleware.
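A minimal sketch of what such a type could look like (the field names and layout here are illustrative, not what the repo actually defines):

```rust
/// Hypothetical packed color type: a Box<[Color]> stays tightly packed
/// (4 bytes per pixel) and can be rearranged into minifb's u32 layout
/// with a simple copy at the top level.
#[repr(C)]
#[derive(Clone, Copy)]
pub struct Color {
    pub r: u8,
    pub g: u8,
    pub b: u8,
    pub a: u8,
}
```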
Also, I don't think I addressed whether or not we should have our own `Color` struct; I'm ok with this in the same way I'm ok with us having our own `Sink` type(s), as they're precisely what we need, so I think this is ok. :)
I converted the color struct from `f32` to `u8`. `Color` should probably still live in the middleware, since the emulation core outputs two grayscale channels and doesn't know or care about the anaglyph colors.
I'm not sure I see your vision for the color space transforming passes. Since the current `VideoFrameSink` trait expects frames to be in the form of `(Box<[u8]>, Box<[u8]>)`, should the middleware crate host a new trait that represents color space transformations?
On a somewhat related note, should I modify the `VideoFrameSink` trait to include a means of determining whether or not the sink can currently provide a frame to the rendering system? That seems to make the most sense if we're going to allow nesting of arbitrary `Sink`s.
> I converted the color struct from `f32` to `u8`.
Cool, thanks :)
> `Color` should probably still live in the middleware, since the emulation core outputs two grayscale channels and doesn't know or care about the anaglyph colors.
Yeah, this sounds sensible to me.
> I'm not sure I see your vision for the color space transforming passes. Since the current `VideoFrameSink` trait expects frames to be in the form of `(Box<[u8]>, Box<[u8]>)`, should the middleware crate host a new trait that represents color space transformations?
I guess I'm more thinking `VideoFrameSink` could be generalized to just `Sink<T>`, and the emulator would expect a `Sink<(Box<[u8]>, Box<[u8]>)>` for its `video_frame_sink` (in which case `VideoFrameSink` could be an alias for that type, but it's probably not terribly important to do it that way). Then the `Sink` trait and anaglyph transform could look something like this (syntax may not be entirely correct):
```rust
trait Sink<T> {
    fn append(&mut self, item: T);
}

// Not sure we can actually alias like this, but you get the idea :)
type VideoFrameSink = Sink<(Box<[u8]>, Box<[u8]>)>;

// AnaglyphFrameSink is a sink for stereo frames, but it will dump mono frames
// into its inner sink after conversion
struct AnaglyphFrameSink<T: Sink<Box<[Color]>>> {
    inner: T,
}

impl<T: Sink<Box<[Color]>>> AnaglyphFrameSink<T> {
    fn new(inner: T) -> AnaglyphFrameSink<T> {
        AnaglyphFrameSink { inner: inner }
    }

    fn into_inner(self) -> T {
        self.inner
    }
}

impl<Inner: Sink<Box<[Color]>>> Sink<(Box<[u8]>, Box<[u8]>)> for AnaglyphFrameSink<Inner> {
    fn append(&mut self, item: (Box<[u8]>, Box<[u8]>)) {
        // Do the conversion and dump the new Box<[Color]> into self.inner
    }
}
```
The top-level code ends up being similar to how it is today:
```rust
// Now this stores a boxed color slice, which has better semantics anyways.
// It no longer conforms to `VideoFrameSink`, but it has the correct type for
// the output of the `AnaglyphFrameSink`
struct SimpleFrameSink {
    inner: Option<Box<[Color]>>,
}

// each frame
let frame_sink = SimpleFrameSink { inner: None };
// Pass ownership to this; we'll take it back later
let mut anaglyph_frame_sink = AnaglyphFrameSink::new(frame_sink);
// step emu, passing in anaglyph_frame_sink for its video_frame_sink param
// pull frame out of the inner sink, if available
let frame_sink = anaglyph_frame_sink.into_inner();
// Check frame_sink's inner to see if we have a frame and display it if so
```
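The piece this example leaves implicit is the `Sink<Box<[Color]>>` impl for `SimpleFrameSink`; a sketch, assuming the latest-frame-wins behavior the current frontend wants:

```rust
impl Sink<Box<[Color]>> for SimpleFrameSink {
    fn append(&mut self, item: Box<[Color]>) {
        // Only the most recent frame is kept; the frontend pulls it out
        // after stepping the emulator and unwrapping the AnaglyphFrameSink.
        self.inner = Some(item);
    }
}
```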
> On a somewhat related note, should I modify the `VideoFrameSink` trait to include a means of determining whether or not the sink can currently provide a frame to the rendering system? That seems to make the most sense if we're going to allow nesting of arbitrary `Sink`s.
I think the "has frame ready"-ness of the sinks isn't very general; it's only useful at the very top-level in our current frontend, as it only cares about the latest frame produced. That's a semantic that would vary per frontend and configuration; for example, if we had a frame sink that dumps all frames to disk as .png's, we'd want every frame going through there to be dumped to disk, instead of dropping intermediate ones like we might do now in our current bootstrapping.
Perhaps this is a bit confusing, but it's really just like having iterators, but backwards. My point is that if we introduce some general frame-mapping sinks, I'd like them to be as general as possible, and I think this would be a cool way to do that. It would allow arbitrary nesting of different color conversion sinks by the frontends, whatever they wanted to do, including adding their own, with no special-case code really. In fact, one kind of `Sink` could even be constructed with a closure, so we could have an even more general `Sink` that just pumps each item through the closure and dumps it into its inner `Sink` :)
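A sketch of that closure-based mapping sink, assuming the `Sink<T>` trait from above (the name `MapSink` is made up here):

```rust
// Pumps each item through a closure and appends the result to the inner sink,
// so any per-frame transform (anaglyph, linear -> sRGB, ...) can be expressed
// without a dedicated struct.
struct MapSink<F, Inner> {
    f: F,
    inner: Inner,
}

impl<A, B, F, Inner> Sink<A> for MapSink<F, Inner>
where
    F: FnMut(A) -> B,
    Inner: Sink<B>,
{
    fn append(&mut self, item: A) {
        let mapped = (self.f)(item);
        self.inner.append(mapped);
    }
}
```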
Yeah, that makes sense. I should be able to bang out a first pass implementation at least.
The same could work for audio in that case as well; `AudioFrameSink` is really just `Sink<(i16, i16)>` and `AudioBufferSink` is really just `Sink<&[(i16, i16)]>`, though I don't really have a use case for it.
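If a use case did come up, an adapter between the two would follow the same wrapping pattern as the video sinks; a sketch (the name is made up, not from the PR):

```rust
// Forwards each stereo frame from a buffer-oriented sink to a per-frame inner sink.
struct AudioBufferToFrameSink<Inner: Sink<(i16, i16)>> {
    inner: Inner,
}

impl<'a, Inner: Sink<(i16, i16)>> Sink<&'a [(i16, i16)]> for AudioBufferToFrameSink<Inner> {
    fn append(&mut self, item: &'a [(i16, i16)]) {
        for &frame in item {
            self.inner.append(frame);
        }
    }
}
```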
Indeed. Should that change be made as part of this PR or separately?
I think sticking it on this one would be cool, so we can see how it looks in-context and continue review if necessary :)
I think this is up for another review. A couple of things I'm not quite happy with:

- `Sink` versus `SinkRef`. I feel like it should be possible to do something clever with lifetimes to remove the necessity of having both of these, but after banging my head against the borrow checker for like 2 hours I didn't get much of anywhere =/
- `VideoFrameSink`, `AudioBufferFrameSink`, etc. being separate traits. It may be better to just use `Sink<T>` everywhere, since with the current setup there's a bunch of empty trait implementations scattered around. On the other hand, it's nicer IMO to be able to say `fn (v: VideoFrameSink)` instead of `fn (v: Sink<VideoFrame>)` (one possible middle ground is sketched after this list).

I also have yet to really dive into the color space stuff. It seems like nobody has really measured the gamma curve on the VB LEDs, and unfortunately I don't have the equipment to do that.
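A minimal sketch of that middle ground, assuming the generalized `Sink<T>` trait and some `VideoFrame` type for `(Box<[u8]>, Box<[u8]>)` (this is just a common Rust pattern, not something either side of this thread has committed to):

```rust
// Empty "alias" trait plus a blanket impl: anything that is a Sink<VideoFrame>
// is automatically a VideoFrameSink, so there are no per-type empty impls to
// maintain, but signatures can still say `fn render(v: &mut VideoFrameSink)`.
trait VideoFrameSink: Sink<VideoFrame> {}

impl<T: Sink<VideoFrame>> VideoFrameSink for T {}
```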
Cool stuff :) My concerns are basically what you said, plus some stuff I may not have communicated very well. I think I'm gonna take a stab at this for an hour or two and report back, as I'm interested in seeing how we can fit all these bits together in the best way possible :)
Ok actually, what happened here is I read the code out of curiosity before work, and didn't re-read it before just reading your previous comment and digging in. A lot changed since this morning; I'm quite pleased generally with what you have here :) I'm toying with just removing the `VideoFrameSink` etc. traits as well as the `SinkRef` trait and some other tiny things, so I'll just keep going and see if I find anything else.
Also, now that we have the middleware crate, we can probably move a lot of other things there eventually, like `SystemTimeSource`, `CpalDriver`, etc. I'll leave that for later though :)
Ah yeah, actually we might not want to move `CpalDriver` at least, to keep the middleware dependencies low, but we'll see.
> I also have yet to really dive into the color space stuff. It seems like nobody has really measured the gamma curve on the VB LEDs, and unfortunately I don't have the equipment to do that.
Forgot to address this; I agree, and I don't have this equipment myself either. In https://github.com/bobtwinkles/rustual-boy/pull/1 I at least pulled the sRGB adjustment out of the VIP, so external code can now assume its output is linear. Later, if the LED brightness curve can be measured accurately, we can model it in the VIP, which I think is the correct place to do it given that its output is expected to be linear.
Thanks for the merge @bobtwinkles , this was fun to iterate on :)
PR to support common mode anaglyph rendering through an intermediate crate (`rustual-boy-middleware`). Marked as WIP because I'd like feedback on the anaglyph implementation;