moq-wg / moq-transport

draft-ietf-moq-transport

Object Model mapping to QUIC #244

Closed suhasHere closed 9 months ago

suhasHere commented 1 year ago

Recent discussions around the points highlighted in https://datatracker.ietf.org/doc/html/draft-jennings-moq-usages-00 have made it clear that the core transport protocol needs to provide guidance on how the moqt object model maps to QUIC.

Given that different use-cases have slightly varying requirements, there is also a need for applications to specify their preference for how QUIC streams are used to transport objects and groups within a track from the publisher to the subscriber.

Here is the proposal for a Transport Delivery Mode enumeration as part of the SUBSCRIBE OK message, as an experimental addition to the protocol, for publishers to express their interest.

The following delivery modes are proposed:

The core idea behind such an experimental extension is to help understand the application, deployment, and interoperability requirements before the final path is chosen for the core transport.

wilaw commented 1 year ago

In moq-transport, an intermediary knows how to extract objects from incoming Streams (there may be 1..N Objects in a Stream), read their metadata (such as Group and priority assignments), cache them, and potentially drop them depending on prioritization settings. They also need to know how to forward them, which I believe is the intent of this issue. Perhaps we should reframe these as forwarding preferences? They instruct intermediaries beyond the current hop on how to propagate the objects. The various forwarding types can be enumerated in the spec and should be a new field in the header of each Object.

If the intermediary is not making a new Stream for every Object it forwards, then it needs to bind a new Object to some existing Stream. I term this the "binding ID". I have relisted the groups, adding in the binding ID that an intermediary would use.

  • Send each object in a new stream (default?). Binding ID: none needed.
  • Send all objects with the same Group in the same stream. Binding ID: namespace + Group number.
  • Send all objects with the same Priority in the same stream. Binding ID: namespace + priority value.
  • Send all Objects in the same track in the same stream. Binding ID: namespace + track name.
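The binding-ID scheme above could be sketched as follows. This is purely illustrative: the enum and function names are invented here, and Python is used only for clarity.

```python
from enum import Enum, auto

class ForwardingPreference(Enum):
    """Hypothetical enumeration of the forwarding preferences listed above."""
    STREAM_PER_OBJECT = auto()
    STREAM_PER_GROUP = auto()
    STREAM_PER_PRIORITY = auto()
    STREAM_PER_TRACK = auto()

def binding_id(pref, namespace, track_name, group_number, priority):
    """Key an intermediary would use to pick the outgoing stream for an object."""
    if pref is ForwardingPreference.STREAM_PER_OBJECT:
        return None  # every object gets a fresh stream; no binding needed
    if pref is ForwardingPreference.STREAM_PER_GROUP:
        return (namespace, group_number)
    if pref is ForwardingPreference.STREAM_PER_PRIORITY:
        return (namespace, priority)
    return (namespace, track_name)  # STREAM_PER_TRACK
```

An intermediary would look up (or create) the outgoing stream keyed by the returned tuple.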

kixelated commented 1 year ago

I want something simpler.

If a relay receives objects A, B on QUIC stream 1 and objects C, D on QUIC stream 2, then the relay continues that relationship. The relay MUST NOT reposition objects within a QUIC stream.

Object per stream: A:1 B:2 C:3 D:4
Group per stream: AB:1 CD:2
Layer per stream: AC:1 BD:2
Track per stream: ABCD:1

Bonus points if objects on the same stream share a header. ex. prioritization is applied at a stream granularity, not object granularity.

wilaw commented 1 year ago

If a relay receives objects A, B on QUIC stream 1 and objects C, D on QUIC stream 2, then the relay continues that relationship.

How would caching work under this scheme? Caching introduces a temporal discontinuity to the transmission. Should the relay continue to cache by Object, or should it instead cache by Stream?

kixelated commented 1 year ago

If a relay receives objects A, B on QUIC stream 1 and objects C, D on QUIC stream 2, then the relay continues that relationship.

How would caching work under this scheme? Caching introduces a temporal discontinuity to the transmission. Should the relay continue to cache by Object, or should it instead cache by Stream?

You would cache by stream and byte offset, like HTTP chunked-transfer. If you evict A from the cache, you would also evict B from the cache.

If the application chooses to put A+B on the same QUIC stream in that order, then it expects A+B to be delivered in that order. If the application wanted those objects to arrive in any order, then it would have put A and B on separate streams.
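The cache-by-stream-and-byte-offset idea could be sketched like this (a hypothetical in-memory cache, Python for illustration only):

```python
class StreamCache:
    """Cache keyed by stream; objects are just byte ranges within it."""

    def __init__(self):
        self._streams = {}  # stream_id -> bytearray of contiguous stream data

    def append(self, stream_id, chunk: bytes):
        """Store the next received chunk at the current end of the stream."""
        self._streams.setdefault(stream_id, bytearray()).extend(chunk)

    def read(self, stream_id, offset: int, length: int):
        """Serve a byte range, like HTTP chunked transfer; None if not cached."""
        data = self._streams.get(stream_id)
        if data is None or offset + length > len(data):
            return None
        return bytes(data[offset:offset + length])

    def evict(self, stream_id):
        """Eviction is all-or-nothing per stream: dropping A also drops B."""
        self._streams.pop(stream_id, None)
```

Because objects share a stream, there is no way to evict A while keeping B, which matches the semantics described above.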


Should we support a situation where a client transmits A+B on the same stream, but expects the relay to split them into separate A and B streams to downstream clients? I don't think so, because of congestion on the first mile. Once you introduce head-of-line blocking, you can't remove it.

To oversimplify, it's like converting TCP -> UDP. If there's congestion on the TCP hop, then UDP packets will arrive in TCP order despite being split. This still has benefits for last-mile congestion versus TCP -> TCP, but to handle first-mile congestion what you really want is UDP -> UDP.

Vice versa, should a client transmit A and B on separate streams, but the relay combines them into A+B for downstream clients? Absolutely not, as you actually suffer from both first-mile congestion and last-mile congestion.

The analogy is UDP -> TCP. Any unreliable or out-of-order delivery at the first mile needs to be fixed before it can be converted to reliable and ordered delivery. This is then exacerbated by prioritization, as the relay wants object A to unblock B, but the sender may decide that C is higher priority.

These same principles apply for RTMP -> WebRTC and WebRTC -> HLS. I spent years working on the former at Twitch and we gave up because the user experience was terrible. We offered WebRTC -> HLS but ironically it introduces more latency and is a worse user experience than RTMP -> HLS.


I strongly believe that relays need to provide: X -> X

If an application introduces head-of-line blocking by putting content on the same stream, then the relay must maintain that head-of-line blocking. The relay doesn't try to introduce more, nor does it try to introduce less.

This is also amazingly simple for relays; they just proxy QUIC streams. The application still needs stream prioritization for inter-stream relationships, but now it can use stream ordering for intra-stream dependencies.

For example, the base catalog and deltas would be on the same QUIC stream. There's no need for a parent sequence number, as the relay would deliver the base and deltas in order. There's no reason for the relay to cache them separately, as evicting the base but not the deltas from the cache would be a mistake.
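The catalog example could be sketched as follows. This is a toy illustration: a shallow dict merge stands in for whatever real JSON-delta format the application would use.

```python
import json

def apply_catalog_stream(messages):
    """Apply a base catalog and its deltas in the order the stream delivers them."""
    catalog = json.loads(messages[0])      # first object on the stream: the base
    for delta in messages[1:]:             # subsequent objects: deltas, in order
        catalog.update(json.loads(delta))  # stand-in for a real JSON-delta merge
    return catalog
```

Because stream order guarantees the base arrives before any delta, the receiver needs no reassembly logic beyond reading the stream front to back.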

kixelated commented 1 year ago

@fluffy mentioned something at the start of the call that I wanted to address.

It sounded like the client was transmitting a single QUIC stream, and he wanted parts of that stream to be cached independently.

However I doubt you actually want this behavior, because you've just reinvented RTMP. The whole point of groups is that they're independent, but this property is lost when they share the same QUIC stream. They wouldn't actually be independent for first-mile delivery, and just like I mentioned in the last post, any first-mile congestion will wreak havoc on last-mile delivery.

I think you want to split each group into a separate QUIC stream: independent groups = independent delivery = independent streams

This is why I want MoQ group == QUIC stream. However, I would be fine with MoQ object == QUIC stream, because like @hardie brought up at the start, I can use a single object per group to get the desired behavior. But something needs to be specified, because a relay MUST NOT be able to move my objects to other streams.

suhasHere commented 1 year ago
  • If this is a single GoP, then it can't be cached independently. If a new subscriber joins then it needs the entire GoP. You have to make multiple GoPs if you want independent join/cache points.

When a new subscriber joins mid-GOP, there are only 2 options for any meaningful experience:

This goes to issue #245

suhasHere commented 1 year ago

This is why I want MoQ group == QUIC stream. However, I would be fine with MoQ object == QUIC stream, because like @hardie brought up at the start, I can use a single object per group to get the desired behavior. But something needs to be specified, because a relay MUST NOT be able to move my objects to other streams.

I think we are mixing 2 different levels of mappings here.

  1. how an application maps its data to moqt object model

    • One object as a GOP or one object as an encoded video frame of 33ms is entirely up to the application.
  2. how aspects of moqt object model is mapped to the QUIC

    • Once you have application data mapped to the moqt object model, it comes down to the question of how moqt objects/groups are transported via QUIC mechanisms. This issue talks about this mapping in particular.

Yes, the application needs to know how to do 1 and 2, but the moqt transport only needs to specify 2.

Also, caching needs to be done at the object level, since it allows retrieval from the cache given track, group, and object information as keys.

I don't think we should normatively specify that a relay shouldn't map things differently from ingest to egress. But yes, a given relay implementation can choose to map it differently, and it does indicate it via the transport delivery mode.

suhasHere commented 1 year ago

This is why I want MoQ group == QUIC stream

You can achieve this via a delivery mode of stream-per-group. If a group has a single object (say a GOP), it just caches one object for that GOP.

kixelated commented 1 year ago
  • If this is a single GoP, then it can't be cached independently. If a new subscriber joins then it needs the entire GoP. You have to make multiple GoPs if you want independent join/cache points.

When a new subscriber joins mid-GOP, there are only 2 options for any meaningful experience:

  • Start from the beginning of the current GOP to get the IDR
  • Wait for the next GOP

This goes to issue #245

Yeah, my point is that you shouldn't be sending multiple GoPs over the same QUIC stream; otherwise it's not possible to serve the latest GoP during congestion. Using a single stream for multiple GoPs is only optimal when there's zero congestion, in which case just use TCP lul.

This is why I want MoQ group == QUIC stream. However, I would be fine with MoQ object == QUIC stream, because like @hardie brought up at the start, I can use a single object per group to get the desired behavior. But something needs to be specified, because a relay MUST NOT be able to move my objects to other streams.

I think we are mixing 2 different levels of mappings here.

  1. how an application maps its data to the moqt object model
  • One object as a GOP or one object as an encoded video frame of 33ms is entirely up to the application.
  2. how aspects of the moqt object model are mapped to QUIC
  • Once you have application data mapped to the moqt object model, it comes down to the question of how moqt objects/groups are transported via QUIC mechanisms. This issue talks about this mapping in particular.

Yes, the application needs to know how to do 1 and 2, but the moqt transport only needs to specify 2.

Yeah, the problem is that without 2 the application can't decide 1. The transport needs to provide properties that the application can use. Right now MoqTransport OBJECTs are closer to jumbo-datagrams because they're semi-reliable and semi-ordered. It's a huge red flag when the application can't reliably deliver deltas in order because of the object/group abstraction.

At a minimum, the application needs the ability to deliver a stream of bytes over MoqTransport. We should absolutely use QUIC streams for that instead of reinventing them via an increasingly convoluted object model with flags.

Also, caching needs to be done at the object level, since it allows retrieval from the cache given track, group, and object information as keys.

Caching should be performed at the stream level via byte ranges. Even with objects, you want to cache at the byte-range level, as waiting to receive an entire object before caching/serving can only add latency. For example, an I-frame takes multiple round-trips to deliver (especially with packet loss), so we don't want to wait to receive the entire frame before caching/serving it.

Here's how a relay should work: read a chunk from upstream, (optionally) write the chunk to cache, forward the chunk to downstream. A chunk is a stream offset+length, aka a QUIC STREAM frame.

I don't think we should normatively specify that relay shouldn't map things differently from ingest to egress. But yes a given relay implementation can choose to map it differently and it does indicate it via the transport delivery mode.

I think the relay MUST maintain the stream mapping. Otherwise the application MUST assume the lowest common denominator, where some dumb relay decided to send every object over separate streams in an arbitrary order, or decided to drop arbitrary objects in the middle of a group.

Imagine if an HTTP relay were allowed to deliver an HTTP response body in an arbitrary order or unreliably. It would break so many assumptions and force the application to anticipate this substandard delivery, even if 99% of relays delivered the body correctly. Or it would become a de facto standard that relays MUST deliver the body reliably and in order, in which case it should have been made an official standard.

kixelated commented 1 year ago

Here's how a relay should work: read a chunk from upstream, (optionally) write the chunk to cache, forward the chunk to downstream. A chunk is a stream offset+length, aka a QUIC STREAM frame.

Adding to this tangent, caching based on byte offset instead of frame number is far more efficient. Let me explain why.

Let's assume we send an entire 2s GoP via QUIC stream. A subscriber lost their connection half-way through a GoP and wants to request it again:

In the ideal scenario, that's one lookup into the cache table. We advance the data pointer X bytes to the start and read until the requested Y byte. This memory is mostly on the same page, and can be written to the QUIC stream in one operation.

In the poor scenario, the relay has to do N cache table lookups. These are likely all in separate memory pages and there's N writes to the QUIC stream. N depends on the frame rate if every frame is a separate object.

suhasHere commented 1 year ago

Yeah, the problem is that without 2 the application can't decide 1. The transport needs to provide properties that the application can use

That's exactly what the TransportDeliveryMode in this issue is addressing.

suhasHere commented 1 year ago

Otherwise the application MUST assume the lowest common denominator, where some dumb relay decided to send every object over separate streams in an arbitrary order, or decided to drop arbitrary objects in the middle of a group.

Applications themselves have a choice to send every object in its own stream. Relays may choose

Even with objects, you want to cache at the byte range level, as waiting to receive an entire object before caching/serving can only add latency. For example, an I-frame takes multiple round-trips to deliver (especially with packet loss), so don't want to wait to receive the entire frame before caching/serving it.

I am not sure how the conclusion was reached here that the entirety of the object has to be received before serving downstream, though. If an object is an I-Frame/GOP, the bytes/fragments come in and are sent to the caching layer as fragments to build the cache for that object, and are also forwarded to all the subscribers in parallel as fragments. This issue for delivery mode doesn't affect that behavior in any way.

kixelated commented 1 year ago

Yeah, the problem is that without 2 the application can't decide 1. The transport needs to provide properties that the application can use

That's exactly what the TransportDeliveryMode in this issue is addressing.

Yeah, we both agree on that front and the disagreement is just over the implementation. Instead of N different modes toggling which messages constitute a QUIC stream, I literally just need a promise that the relay can't rearrange QUIC stream contents. Even if we do add TransportDeliveryMode, I still need a promise that the relay won't change it.

ie. if an application creates a QUIC stream with only object 1 and object 2 on it, then the relay MUST also transmit a QUIC stream with only object 1 and object 2 on it, and in that order. If a relay is allowed to split them into separate streams or insert object 3 in the middle, then it completely breaks many applications.

With that property in place, I do think the stream header could deduplicate some OBJECT properties, but that's an optimization and not actually required. It just seems like a bug to send two different objects with different priorities or groups over the same QUIC stream.

Otherwise the application MUST assume the lowest common denominator, where some dumb relay decided to send every object over separate streams in an arbitrary order, or decided to drop arbitrary objects in the middle of a group.

Applications themselves have a choice to send every object in its own stream. Relays may choose

Relays should absolutely not be able to choose; it can only create problems. The application made the decision to deliver objects over separate QUIC streams for a very explicit reason. The relay lacks any information about the application by design so if it changes the delivery mode, it's either making a completely uninformed and likely detrimental decision, or it's applying arbitrary business logic.

Even with objects, you want to cache at the byte range level, as waiting to receive an entire object before caching/serving can only add latency. For example, an I-frame takes multiple round-trips to deliver (especially with packet loss), so don't want to wait to receive the entire frame before caching/serving it.

I am not sure how the conclusion was reached here that the entirety of the object has to be received before serving downstream, though. If an object is an I-Frame/GOP, the bytes/fragments come in and are sent to the caching layer as fragments to build the cache for that object, and are also forwarded to all the subscribers in parallel as fragments. This issue for delivery mode doesn't affect that behavior in any way.

That was mostly from our old conversations around if objects are atomic. But I absolutely agree, relays should cache/forward streams.

suhasHere commented 1 year ago

I just wanted to clarify, though: caching is done for objects; however, a relay need not wait for the full object to arrive before forwarding to the subscribers.

kixelated commented 1 year ago

I just wanted to clarify, though: caching is done for objects; however, a relay need not wait for the full object to arrive before forwarding to the subscribers.

There's no distinction. If you can serve partial objects to existing subscribers (forwarding), then you can serve those partial objects to new subscribers too (caching).

In theory you could differentiate between existing and new subscribers, but that gets very complicated to implement and is detrimental when objects are not atomic (ex. GoP).

You might be talking about addressability, i.e. you can't request a byte range within an object. But the object cache itself is absolutely broken into a dynamic list of byte chunks.

suhasHere commented 1 year ago

An object is the addressable entity from the moqt application perspective. How it gets delivered (as chunks, fragments, or some number of bytes) is a lower-layer transport decision, often driven by MTU and other factors. Moqt applications ask for an object and they get it delivered. We should focus on what the application model is here.

suhasHere commented 1 year ago

At IETF 118 there was support for adding an explicit indicator for the object-model-to-transport mapping. PR #333 captures 2 proposals that were discussed during the Boston interim.

Please review #333, and if there are suggestions or a totally different way to achieve the same results, please propose it here.

kixelated commented 1 year ago

One thing missing from the discussion is ordering/reliability. The entire point of sending OBJECTs over specific streams is to utilize those properties so the application can decode in order. It's pointless to have stream mapping if OBJECTs can be reordered/dropped on those streams.

Anyway, here are my proposals to round out the list:

Proposal 3 (implicit): Send objects in the same manner as they were received.
Proposal 4 (implicit==explicit): Same as proposal 3 but add a MoQ Stream ID field to OBJECT.

But like I alluded to at the start, you need more than the Stream ID to be fully explicit. You also need signals to reproduce the ordering, gaps, and stream end. And all of a sudden you've reimplemented the QUIC STREAM frame.

I'm a massive fan of implicit stream mapping. I don't think there's any other option.

fluffy commented 1 year ago

@kixelated - I think it would be good to write down in the PR the details on implicit, and cover how this works when a client switches networks or there is an error, and how it works with cached data. It may be that all of this is easy, but I want to get my head fully around it in trying to sort this out.

kixelated commented 1 year ago

@kixelated - I think it would be good to write down in the PR the details on implicit, and cover how this works when a client switches networks or there is an error, and how it works with cached data. It may be that all of this is easy, but I want to get my head fully around it in trying to sort this out.

Do you want me to push directly to that PR?

QUIC automatically handles network migration so you're talking about a hard disconnect. Something like going through a tunnel and triggering the idle timeout (ex. 10s).

The easy answer is that all streams are reset on connection loss. An application that uses long-lived streams will need to support multiple streams if they want to support reconnects. For example, if the catalog stream is reset due to connection loss, the new publisher can make a new stream with group += 1. You can still keep the ANNOUNCE/SUBSCRIBE alive, but any streams from the old publisher are reset.

Attempting to resume a stream on a new connection is difficult to impossible, regardless of implicit versus explicit. The problem like I mentioned is gaps; if you're using a stream then the decoder expects objects to arrive in a specific order.

The old connection would have to use QUIC ACKs to guess which streams/objects were received, just in case it crashes. However, ACKs don't actually mean that the OBJECTs were flushed to the application. You could guess, but we would likely have to add application ACKs (ex. OBJECT_OK) primarily for this feature. You would also need the relay to throw out duplicate objects, because it may have already received an object but wasn't able to ACK it. Otherwise sending the same OBJECT twice over a stream is likely to break a decoder. It's a whole mess.

afrind commented 1 year ago

Individual Comment:

I have another proposal, which is along the lines of @huitema 's idea from #270. Combine the Proposal 2 Mode from the OBJECT wire format with the message type, and use a stream header to avoid repeating the same or potentially mismatched information.

eg:

StreamPerObject: Message = OBJECT_STREAM (same as OBJECT today -- eg no payload length)

Datagram: The same as OBJECT_STREAM but with no serialized type field (or add one if that's too "implicit")

Multi Object Streams start with a stream header message and the fields that are constant across all objects on that stream.

StreamPerGroup:

STREAM_HEADER_GROUP {
  TrackAlias
  Group Sequence
}
SHORT_OBJECT_GROUP {
  Object Sequence
  Send Order
  Payload Length
  Payload
}

With similar constructs for StreamPerTrack and StreamPerPriority. The SHORT_OBJECT_* types are not serialized on the wire; the type is inferred from the stream header. I will note this adds more flexibility than Proposal 2, e.g. a publisher could mix all 5 modes of object delivery in a single track, though we could further restrict it if we wanted to.
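A receiver of a StreamPerGroup stream would parse one header and then a sequence of short objects. The sketch below is an assumption-laden toy: fixed 32-bit big-endian fields stand in for QUIC varints, and the function name is invented.

```python
import struct

def parse_stream_per_group(buf: bytes):
    """Parse a StreamPerGroup stream: one STREAM_HEADER_GROUP, then objects.

    Simplified encoding: 32-bit big-endian integers instead of QUIC varints.
    """
    # STREAM_HEADER_GROUP carries the fields constant for the whole stream.
    track_alias, group_seq = struct.unpack_from(">II", buf, 0)
    pos = 8
    objects = []
    while pos < len(buf):
        # Each SHORT_OBJECT_GROUP: Object Sequence, Send Order, Payload Length.
        obj_seq, send_order, length = struct.unpack_from(">III", buf, pos)
        pos += 12
        objects.append((obj_seq, send_order, buf[pos:pos + length]))
        pos += length
    return track_alias, group_seq, objects
```

Note how TrackAlias and Group Sequence appear once, so per-object overhead shrinks to sequence, send order, and length.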

I wrote text for this proposal but wanted to pitch the idea here first.

kixelated commented 1 year ago

Hmm, so there are some nuances to that proposal @afrind. Can SHORT_OBJECT_GROUP be reordered and/or dropped? Basically, what happens when sequence=3 has a higher sendOrder (lower priority) than sequence=4?

kixelated commented 1 year ago

I think we need to take a step back and really ask ourselves what properties do we want from the transport. I'm worried that we're throwing solutions at the wall without analyzing the problem.

An application has live media that it needs to break into pieces. Mostly avoiding existing terminology for the moment, let's say we break a track into independent/streamable fragments which are then broken into chunks. Each fragment can have different modes:

As a thought experiment, let's consider how an application would want to send a JSON catalog with (JSON) delta updates:

Each catalog would be a reliable/ordered/framed fragment in this scheme. The first chunk is the base JSON and each subsequent chunk is a JSON delta.

Here are some possible use-cases for each type of fragment:

| reliable | ordered | framed | example |
| --- | --- | --- | --- |
| yes | yes | yes | frames in decode order (without container) |
| yes | yes | no | frames in decode order (with container) |
| yes | no | yes | chat messages with best-effort ordering? |
| yes | no | no | none |
| no | yes | yes | lossy frames in decode order |
| no | yes | no | self-repairing codec? |
| no | no | yes | lossy frames in arbitrary order |
| no | no | no | none |

Note that sending a single chunk per fragment is the same for all modes. This is what RUSH currently does, and it uses a reassembly buffer to reorder. However, it would be neat if MoqTransport could perform that reordering via unreliable/ordered delivery, so the decoder could read in order but skip over any holes (ex. drop b-frames during congestion).
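The read-in-order-but-skip-holes behavior described here could be sketched as follows. This is a hypothetical helper, not part of any draft; the `horizon` parameter is an invented knob for deciding when to give up on a hole.

```python
def drain_in_order(received: dict, next_seq: int, horizon: int):
    """Deliver buffered objects in sequence order, skipping a missing
    sequence once anything `horizon` or more ahead of it has arrived
    (e.g. a b-frame dropped during congestion)."""
    delivered = []
    while True:
        if next_seq in received:
            delivered.append(received.pop(next_seq))
            next_seq += 1
        elif any(s >= next_seq + horizon for s in received):
            next_seq += 1  # skip the hole rather than block the decoder
        else:
            break  # wait for more data before deciding
    return delivered, next_seq
```

A decoder would call this each time an object arrives, feeding `received` with out-of-order objects and consuming the in-order `delivered` list.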

kixelated commented 1 year ago

Okay cool, but this concept is kind of useless unless we can actually implement it. Here's what is possible in draft-01:

| reliable | ordered | framed | stream per | objects |
| --- | --- | --- | --- | --- |
| yes | yes | no | fragment | one (unsized) |
| no | no | yes | chunk | one (unsized) |

The rest are not guaranteed due to the lack of a stream mapping. A relay can decide to reorder/drop/move OBJECTs on a whim. The only way to achieve guaranteed behavior is to use a single, unsized object per stream so the relay can't coalesce them. I think we all agree that this is a problem.

Let's start with the easy proposal (3). Implicit mapping means the relay is not allowed to change the contents of a QUIC stream and all OBJECTs are delivered in the same position.

| reliable | ordered | framed | stream per | objects |
| --- | --- | --- | --- | --- |
| yes | yes | yes | fragment | per chunk |
| yes | yes | no | fragment | one |
| no | no | yes | chunk | one |

You can now send multiple OBJECTs on the same stream and have them reliably delivered (ex. catalog). But if you want to do more complex stuff, like unreliable/ordered (ex. lossy frames in decode order), then you still need to perform reassembly in the application.

The problem with Alan's StreamPerGroup proposal is that it's unclear if chunks are reliable/ordered. If they are, then you don't need sendOrder or even sequence per SHORT_OBJECT_GROUP. If they're not, then it's ambiguous.

But what about taking that proposal a step further and explicitly including the reliable/ordered flags in the STREAM_HEADER_GROUP? There would always be a stream per fragment with at least one chunk. This would allow:

| reliable | ordered | framed | relay can |
| --- | --- | --- | --- |
| yes | yes | yes | proxy only |
| yes | yes | no | proxy only |
| yes | no* | yes | reorder chunks |
| no* | yes | yes | drop chunks |
| no* | no* | yes | drop/reorder chunks |

The giant caveat here is that any dropping/reordering within a stream can only be performed at the MoqTransport layer. All chunks are reliable/ordered via QUIC streams, but the relay MAY drop chunks if there's sufficient back-pressure (like TCP low-watermark).

However, if latency is important, then you shouldn't perform reliable/ordered delivery in QUIC and then pretend it is unreliable/unordered in the MoQ layer. This is actually one of my complaints with SCTP; it gives the application the ability to specify unordered/unreliable delivery on a per-message basis, but it's a facade.

I don't think we should include reliable/unordered or unreliable/ordered messages on a QUIC stream, because it's a footgun. QUIC just doesn't work like that. Despite our object model, OBJECTs are not actually the unreliable/unordered unit; the QUIC stream is, and we have to remember that.

afrind commented 1 year ago

Individual Comment:

I think it's interesting to consider where reorderable and droppable properties fit in the object model, but I think we can tackle the problems separately and make incremental progress. My proposal takes the draft-01 "implicit" signal and makes it explicit and more compact, without changing anything else.

suhasHere commented 1 year ago

Unfortunately draft-01 as implemented wasn't implicit or explicit, as shown clearly during interop.

afrind commented 1 year ago

Unfortunately draft-01 as implemented wasn't implicit or explicit, as shown clearly during interop.

Sure, my point is that the implicit signal exists whether you read/use it or not, and this explicit scheme conveys equivalent information.

suhasHere commented 1 year ago

The SHORT_OBJECT_* types are not serialized on the wire; the type is inferred from the stream header. I will note this adds more flexibility than Proposal 2, e.g. a publisher could mix all 5 modes of object delivery in a single track, though we could further restrict it if we wanted to.

In this case, I wonder if the flexibility will come back to bite us. I need to think a bit more, though.

suhasHere commented 1 year ago

I feel we are over complicating the issue at hand by mixing different things. Let me share my thinking and see if I messed it up

Applications know what treatment they need from the transport, and they need a way to indicate their intention to the moq layer; this issue is about helping build that abstraction. For applications that don't know what to do, the draft needs to specify a reasonable default.

afrind commented 1 year ago

Discussed offline with Suhas:

In Proposal 3, every Object has the forwarding preference, but in some modes it is "compressed" on the wire by including a stream header. From an API perspective, publishers would publish objects with a mode and subscribers would receive objects with a mode, like Proposal 2.

vasilvv commented 1 year ago

I've thought about this for a bit, and I think I'm also a fan of "explicit-implicit" design. What I mean by this is that we formally define a way to describe the object placement (that is used in the APIs and can be serialized into the cache), but we don't actually send most of it on the wire since it's redundant (a "compression" of sorts).

kixelated commented 1 year ago

Oh, and to be clear, I think Christian and Alan's stream header is absolutely the correct direction. My rambling is only because the presence of sendOrder implies that objects on a stream are not reliable/ordered.

Controversial take: for all of these stream mapping proposals, OBJECTs on a stream MUST be delivered reliably and in order.

Why? Well, the entire point of mapping objects to streams is to get these properties. If a relay is allowed to arbitrarily reorder or drop OBJECTs within a stream, then we're right back to where we started without stream mapping.

For example, we want to produce a catalog with an OBJECT for each delta. These objects are placed on the same stream so the decoder doesn't need to implement a reassembly buffer or detect gaps.

stream: base delta1 delta2 delta3 delta4

If we allow a relay to drop or reorder objects on the same stream for whatever reason, then this is possible:

stream: delta1 base delta4 delta2

For anybody who implemented moq-clock this week, I'm sure this looks familiar. It's what you get with a stream per OBJECT; stream mapping actually served no purpose.

I can live with a flag indicating that objects on a stream are potentially unreliable/unordered, but I do caution that QUIC streams are reliable/ordered by nature. If an application wants unreliable or unordered OBJECTs within a stream, it's possible, but head-of-line blocking will limit the upside. Those objects really should be sent over a dedicated stream or datagram instead.

huitema commented 1 year ago

@kixelated if multiple objects are sent on a single stream, they are by definition delivered reliably and in order. Why do you think that they would not be?

kixelated commented 1 year ago

@kixelated if multiple objects are sent on a single stream, they are by definition delivered reliably and in order. Why do you think that they would not be?

OBJECTs on a stream are reliable/ordered over QUIC, but do we agree that they're reliable/ordered at rest too? Basically a relay MUST transmit OBJECTs in the same manner as they arrived.

afrind commented 1 year ago

Individual Comment:

The different send order on the same stream is covered in issue #105, and I don't think we need to resolve that before we address the basic transport mapping.