Open osmaczko opened 6 months ago
@richard-ramos, @cammellos, can you think of any other factors that should be considered before introducing this change?
CC: @John-44, @jrainville, @felicio
Something to keep in mind: the maximum is not applied to the WakuMessage
itself but to the Gossipsub RPC message. That protobuffer contains announcements about the pubsub topics a peer wishes to subscribe to or unsubscribe from, and the Waku messages are wrapped in a Message
protobuffer which also includes the pubsub topic on which a message is being published.
This means that the logic for segmenting messages must use values less than 150kb, since we need to take into account these other fields, as well as the extra bytes added by protobuffer serialization.
If this line is removed, https://github.com/status-im/status-go/blob/8086b24a9ed462cf70f2eee087629e81f2b51d9e/wakuv2/waku.go#L244 go-waku will use the default maximum of 150kb, meaning that all the constants defined in status-go can be removed.
Thanks @richard-ramos. It has already been taken into account: https://github.com/status-im/status-go/blob/8086b24a9ed462cf70f2eee087629e81f2b51d9e/protocol/common/message_sender.go#L1339-L1345
I believe 37.5KiB (25% of 150KiB) for metadata is more than enough :thinking:
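For concreteness, the budget arithmetic discussed above can be sketched as follows (the constant names `maxRPCSize`, `metadataMargin`, and `maxSegmentSize` are illustrative, not actual status-go identifiers):

```go
package main

import "fmt"

const (
	// Gossipsub RPC hard cap per the discussion above.
	maxRPCSize = 150 * 1024
	// 25% reserved for pubsub topic announcements, the Message wrapper,
	// and protobuffer serialization overhead.
	metadataMargin = maxRPCSize / 4
	// Remaining budget for the actual segmented payload.
	maxSegmentSize = maxRPCSize - metadataMargin
)

func main() {
	fmt.Println(metadataMargin, maxSegmentSize) // 38400 115200
}
```

So with a 25% margin, each segment's payload would be capped at 112.5KiB.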
@osmaczko I would not introduce it for the time being, in order to avoid unknowns, unless really necessary, since we have a tight release schedule
We just discussed the same in our stand up :smile:
We agree, I set it for 2.29 so that Web has time to adapt their code to support chunks and also so that we have enough time to test it.
@fryorcraken what do you think about this? I think this might be adding a bit too much fuel to the fire, and might make debugging message reliability harder, what do you think?
For what it's worth, it would mostly affect community descriptions and image messages, since normal messages are smaller than that limit. However, I'm totally fine to push this to the next milestone if you guys prefer.
I do not believe this would impact reliability so I would leave it out at this point in time.
Message size capping is helpful for:
If (1) is necessary then we can review, as I am not sure what the effect on latency would be if messages are just split into smaller chunks (my instinct says it would help overall, but that would need verifying).
For (2), this would be a necessary step to introducing RLN. We would want to introduce RLN when we want to scale 1:1 messages on a set of shards or if we want to have strong bandwidth guarantees for communities.
Cc @jm-clius
Agree with @fryorcraken on this; at this point it's better not to make this change, as it doesn't affect anything other than identifying bandwidth usage (and that only once RLN is in place). Maybe this change can be taken up as part of the next level of scaling mentioned above, when RLN is integrated.
That said, if the approach for RLN integration is going to be via a serviceNode that offers free tiers, then the message size may not have an impact, as messages would first be sent to a serviceNode which would in turn disperse them in the network. Anyway, this is something to revisit during RLN integration.
Problem
As of now, the maximum Waku message size is 1MiB. According to the Waku specs, the absolute maximum size should be 150KiB and the recommended size is 4KiB.
EDIT: Even though waku specifies the maximum to 150KiB, we shouldn't go higher than 125KiB.
https://github.com/status-im/status-desktop/issues/9923
Implementation
Notes
This change will result in the segmentation of most messages. The segmentation mechanism is designed to reconstruct messages with any segment size, ensuring backward compatibility with changes such as this one. Segmentation was introduced in version 2.27.0. Versions prior to this will not be able to process segmented messages.
Blockers