Work underway on the `next` branch. Breaking changes I'm considering:

[x] hiding fields in favor of accessors. (Also: taking advantage of this to reduce memory usage by keeping around `Box<str>` instead of `String`.)
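The `Box<str>` savings can be seen directly with `std::mem::size_of` (the sizes below assume a 64-bit target):

```rust
// Why Box<str> beats String for set-once, never-mutated fields:
// String carries an extra capacity word; Box<str> is just pointer + length.
use std::mem::size_of;

fn main() {
    // On a 64-bit target: String = ptr + len + cap = 24 bytes,
    // Box<str> = ptr + len = 16 bytes.
    assert_eq!(size_of::<String>(), 24);
    assert_eq!(size_of::<Box<str>>(), 16);

    // Converting is cheap and also drops any excess capacity:
    let s: String = "rtsp://example/stream".to_string();
    let b: Box<str> = s.into_boxed_str();
    assert_eq!(&*b, "rtsp://example/stream");
}
```

Behind an accessor, the caller never sees the storage type, so this swap isn't itself a breaking change once the fields are hidden.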
[x] similarly, maybe we want to add a way to pass through codec-level non-fatal errors: `CodecItem::Error` or the like. We could just make `CodecItem` `#[non_exhaustive]` rather than figure out the details now. (edit: just did the `#[non_exhaustive]` for now.)
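A minimal sketch of the `#[non_exhaustive]` approach, using a stand-in `CodecItem` (not retina's real definition):

```rust
// #[non_exhaustive] forces *downstream* crates to include a wildcard arm,
// so a variant like a future CodecItem::Error can be added without a
// semver-breaking change.
#[non_exhaustive]
pub enum CodecItem {
    VideoFrame(u64),
    AudioFrame(u64),
    // a future Error variant could be added here compatibly
}

// The allow is only needed because this example lives in the defining
// crate, where the match is still seen as exhaustive; downstream crates
// are *required* to write the `_` arm.
#[allow(unreachable_patterns)]
fn describe(item: &CodecItem) -> &'static str {
    match item {
        CodecItem::VideoFrame(_) => "video",
        CodecItem::AudioFrame(_) => "audio",
        _ => "unknown",
    }
}

fn main() {
    assert_eq!(describe(&CodecItem::VideoFrame(0)), "video");
}
```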
[x] removing the deprecated `Session::teardown` method.
[x] removing the `pub` from a few items that got it without much thought. (They're already `#[doc(hidden)]`.)
[x] replacing `VideoFrame::new_parameters: Option<Box<VideoParameters>>` with a boolean. You can obtain the latest parameters via `Stream::parameters`, so there's no good reason for the preemptive clone or for the extra bytes in every `VideoFrame`. Except:
[x] after `retina::client::Demuxed::poll_next` returns `Pending`, currently `Stream::parameters` may reflect changes in the upcoming frame. This is probably confusing / worth avoiding.
[x] `Demuxed` doesn't offer easy access to the streams, but there's no reason it shouldn't.
[x] replacing `retina::codec::Parameters` with `retina::codec::ParametersRef`, to avoid having to clone the parameters when returning them from `retina::client::Stream::parameters`.
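A sketch of the owned/borrowed split, with simplified stand-in types (the fields here are hypothetical placeholders, not retina's actual parameter structs):

```rust
// Stand-in parameter types; the real ones hold much more data, which is
// exactly why cloning on every accessor call is worth avoiding.
pub struct VideoParameters { pub pixel_dimensions: (u32, u32) }
pub struct AudioParameters { pub clock_rate: u32 }

// Owned form, stored inside the stream:
pub enum Parameters {
    Video(VideoParameters),
    Audio(AudioParameters),
}

// Borrowed form, returned from accessors:
pub enum ParametersRef<'a> {
    Video(&'a VideoParameters),
    Audio(&'a AudioParameters),
}

pub struct Stream { parameters: Option<Parameters> }

impl Stream {
    // No clone: just reborrow whatever is stored.
    pub fn parameters(&self) -> Option<ParametersRef<'_>> {
        match &self.parameters {
            Some(Parameters::Video(v)) => Some(ParametersRef::Video(v)),
            Some(Parameters::Audio(a)) => Some(ParametersRef::Audio(a)),
            None => None,
        }
    }
}

fn main() {
    let s = Stream {
        parameters: Some(Parameters::Video(VideoParameters {
            pixel_dimensions: (1920, 1080),
        })),
    };
    match s.parameters() {
        Some(ParametersRef::Video(v)) => assert_eq!(v.pixel_dimensions, (1920, 1080)),
        _ => panic!("expected video parameters"),
    }
}
```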
[x] having `Stream::setup` take an options struct to allow customizing depacketizer options. Useful for e.g. #44.
[x] altering the expectations for `PacketContext` so it's paired with a new type `StreamContext`, rather than just `RtspConnectionContext`. I'd expose `StreamContext` as an accessor on the stream rather than include it in `MessageFrame`. Notably, `MessageFrame` has a pair of `PacketContext`s, so this should shrink `MessageFrame`'s size by four socket addresses, at the cost of the caller having to pull together more information to find the packet in a pcap file.
[x] switching `VideoFrame::data()` and friends from returning a `&Bytes` to returning a `&[u8]`, maybe adding a `VideoFrame::take_data()` that returns a `Vec<u8>`. If the caller wants to modify the bytes in place for some reason, this would be better. I don't think there's any way to go from a `Bytes` back to a `BytesMut`, even if there are no other references. (I see an issue tokio-rs/bytes#350 that seems to say the same.)
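A stand-in sketch of that accessor shape (the `VideoFrame` here is a toy, and `take_data` is a hypothetical spelling of the idea above, implemented with `std::mem::take`):

```rust
// Returning &[u8] hides the storage type entirely; take_data() moves the
// buffer out without copying, leaving an empty Vec behind, so the caller
// gets an owned, mutable buffer.
pub struct VideoFrame { data: Vec<u8> }

impl VideoFrame {
    pub fn data(&self) -> &[u8] {
        &self.data
    }

    // Hands the caller the owned buffer; the frame keeps an empty one.
    pub fn take_data(&mut self) -> Vec<u8> {
        std::mem::take(&mut self.data)
    }
}

fn main() {
    let mut f = VideoFrame { data: vec![0, 1, 2, 3] };
    assert_eq!(f.data(), &[0, 1, 2, 3]);

    let mut owned = f.take_data();
    owned[0] = 9; // in-place mutation is now possible
    assert_eq!(owned, vec![9, 1, 2, 3]);
    assert!(f.data().is_empty());
}
```

Unlike `Bytes`, which can't be turned back into a `BytesMut` even when uniquely owned, a `Vec<u8>` is mutable from the start.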
[x] `PacketItem` should expose RTCP compound packets, not just sender reports.
I'm open to others.
Deferred
I'm going to punt on returning anything borrowed from the `Session`/`Demuxed`. This can't be done with the `futures::stream::Stream` interface (as Rust doesn't have GATs yet / it's unclear if `Stream` can be extended to support them anyway). I think switching to just an inherent method `async fn next(&mut self) -> ...Item` might be worthwhile, which would let us borrow from a ring buffer (see #6). But I don't want to make the improvements already implemented wait for that, and folks say "version numbers are cheap".
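A toy, synchronous stand-in for `Demuxed` shows why the lending shape needs an inherent method: the returned item borrows from `&mut self`, a lifetime relationship `Stream::Item` simply can't express (the real method would be `async`, and none of these fields are retina's actual internals):

```rust
// Hypothetical lending-style demuxer. Each call to next() reuses one
// internal buffer instead of allocating per frame.
pub struct Demuxed {
    buf: Vec<u8>,         // stand-in for a reusable ring buffer
    frames: Vec<Vec<u8>>, // stand-in for queued incoming data
}

impl Demuxed {
    // The returned slice borrows from self, so it's only valid until the
    // next call — exactly the shape a GAT-less Stream can't describe.
    pub fn next(&mut self) -> Option<&[u8]> {
        let frame = self.frames.pop()?;
        self.buf.clear();
        self.buf.extend_from_slice(&frame); // reuse the same allocation
        Some(&self.buf)
    }
}

fn main() {
    let mut d = Demuxed {
        buf: Vec::new(),
        frames: vec![vec![1, 2], vec![3]],
    };
    assert_eq!(d.next(), Some(&[3][..]));
    assert_eq!(d.next(), Some(&[1, 2][..]));
    assert_eq!(d.next(), None);
}
```

The borrow checker enforces that the caller finishes with one frame before asking for the next, which is what makes the single reusable buffer safe.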
possibly `PacketItem` and `CodecItem` should include a `&Stream` to save the caller having to look it up. They should have it handy anyway. `CodecItem::VideoFrame` could also return the `params: Option<&VideoParameters>` (likewise audio/message media) rather than requiring the caller to match on a `Parameters` enum.
reverse #4 decision: have codec items expose a `Buf` that borrows from the internal buffers, rather than copying into a contiguous buffer. If we borrow, we don't have to make a new `Vec` of chunks for each frame. We'll make it easy for the caller to copy into a new contiguous `Vec` as before, or to copy into some other structure (like a single `Vec` for multiple frames, a ring buffer, etc.).
Other deferred stuff, because again version numbers are cheap, and these aren't as fully baked as the stuff above the fold:
switching from `log` to `tracing`. This would allow the caller to supply context for interpreting log messages. Motivating example: Moonfire NVR maintains all its streams from the same tokio thread pool. When Retina logs a message about a stream, it's unclear which stream it applies to. Retina at least knows which URL it's talking to, but Moonfire NVR can wrap that in better context like the name of the camera and main/sub stream, and `tracing` seems like a popular way to augment log messages in this way. Retina might also put in its own spans for each session.
why defer: using `tracing` properly at the Moonfire NVR level would involve moving stuff into fields rather than just the message, and rethinking the log format. Not quite ready to take that on.
revamping how packet loss is reported at the `CodecItem` level. I don't like that right now an item's `loss` field doubles as the number of packets lost (not necessarily consecutively) and an indication of whether the current frame is "complete". I wonder if it's better to represent these as orthogonal concepts. Sometimes we know the loss was in a previous frame; sometimes we're uncertain; sometimes we know we're missing part of the current frame (a prefix, one or more interior regions, or a suffix). I'm not totally sure what the best representation is, but it might involve losing the `loss` field. Having a separate `CodecItem::Loss` would also give a way to represent richer information about the loss (like the packet number range, or the times of the packets before/after the loss).
why defer: because I haven't thought through the best representation yet.
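One hypothetical shape for that orthogonal representation — purely illustrative, not a committed design, and every name below is made up:

```rust
// Separates "how many packets were lost" from "where the loss falls
// relative to the current frame", which today are conflated in one field.
pub enum Loss {
    /// Packets were lost strictly before this frame began.
    BeforeFrame { pkts: u16 },
    /// We can't tell whether the loss affected this frame.
    Uncertain { pkts: u16 },
    /// Part of this frame is known to be missing.
    WithinFrame { pkts: u16, region: Region },
}

/// Which part of the frame the missing packets would have filled.
pub enum Region {
    Prefix,
    Interior,
    Suffix,
}

fn main() {
    let l = Loss::WithinFrame { pkts: 3, region: Region::Prefix };
    if let Loss::WithinFrame { pkts, .. } = l {
        assert_eq!(pkts, 3);
    }
}
```

A standalone `CodecItem::Loss` variant could carry this struct plus the packet number range and surrounding timestamps mentioned above.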
removing `PlayOptions::enforce_timestamps_with_max_jump_secs`. It's already optional, but I'm not sure it belongs at this level at all. The caller can do this validation if they care to, and it may not be used often enough to be worth advertising in this library. They may instead want to do something more sophisticated than trusting the RTP timestamps at all, given that many cameras don't use the monotonic clock to produce them. If I come up with one really battle-tested reliable strategy, it might belong in this library, but I don't think `enforce_timestamps_with_max_jump_secs` is that.
why defer: I wonder if I should go even further and make the `u32` -> `crate::Timestamp` step optional/customizable. Also requires more design effort.
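For callers who still want the check, a sketch of doing it on their side with wrapping 32-bit arithmetic (RTP timestamps wrap modulo 2^32 per RFC 3550; `jump_within` is a hypothetical helper, not a retina API):

```rust
// Returns true if `cur` is at most `max_jump_secs` ahead of `prev` in
// RTP timestamp units. wrapping_sub makes a wrapped timestamp still look
// like a small forward delta.
fn jump_within(prev: u32, cur: u32, clock_rate: u32, max_jump_secs: u32) -> bool {
    let delta = cur.wrapping_sub(prev); // forward distance mod 2^32
    delta <= clock_rate.saturating_mul(max_jump_secs)
}

fn main() {
    let clock = 90_000; // typical RTP video clock rate

    // A normal small advance passes:
    assert!(jump_within(1_000, 4_000, clock, 10));

    // Wrapping past u32::MAX still reads as a small forward jump:
    assert!(jump_within(u32::MAX - 100, 50, clock, 10));

    // A 20-second jump exceeds a 10-second limit:
    assert!(!jump_within(0, 20 * clock, clock, 10));
}
```

Keeping this outside the library leaves room for callers to do something smarter, like cross-checking against RTCP sender reports or the local clock.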