There are many possible solutions to consider here.
The first one is splitting (framing) the packet if it is too large. Unfortunately, this is likely not possible due to the way Netty and the client read packets, especially since a new chunk data packet replaces all the old chunk data the client has for that existing chunk.
The second solution would be to simply increase the limit to something higher. This could possibly be viable, though it is likely that this is either a Netty or a Minecraft limit (on both the client's and the server's end), and either of those possibilities rules this one right out, as we cannot change anything on the client's end.
The third and final solution would be to cut the chunk data off at the limit and then send the rest of the data in one or more multi block change packets that add the remaining data to the chunk. This seems like it would most likely be the most viable, as it does not rely on Netty or the client. It could bring its own performance issues, though: the more packets you send, the slower the chunk will load (packets are processed every tick, though this could be negated by ignoring the queue and just directly sending the packets in real time).
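A rough sketch of that third option's batching step, assuming a placeholder sendMultiBlockChange that builds a multi block change packet and writes it straight to the connection rather than going through the per-tick queue (nothing here is an existing Paper/NMS method):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: "sendMultiBlockChange" is a placeholder, not an existing Paper/NMS method.
public class ChunkOverflowSender {

    // Sends everything that was cut off from the chunk packet in fixed-size batches.
    public static <T> void sendOverflow(List<T> overflowBlocks, int maxPerPacket) {
        for (int start = 0; start < overflowBlocks.size(); start += maxPerPacket) {
            int end = Math.min(start + maxPerPacket, overflowBlocks.size());
            List<T> batch = new ArrayList<>(overflowBlocks.subList(start, end));
            sendMultiBlockChange(batch);
        }
    }

    private static <T> void sendMultiBlockChange(List<T> batch) {
        // Placeholder: real code would build a multi block change packet from the
        // batch and send it to the player directly, bypassing the per-tick queue.
    }
}
```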
The actual issue here is likely the block entity data taking up a lot of space, especially if some of the furnaces contain items.
I believe the reason it unloads just fine is that no chunk data is sent on unload, only on load, which is what PacketPlayOutMapChunk is for.
P.S. please feel more than welcome to call me out if you think I’m talking complete and utter rubbish in some, or even all, of this comment.
It's also possible to split a single chunk packet into multiple chunk packets, each sending only specific chunk sections. But the issue is that you have to serialize the chunk first to check the resulting size, or just split all the chunks that way, which will be a performance hit either way.
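A minimal sketch of that "serialize first, then check the size" idea using a Netty scratch buffer; writeSection is a stand-in for the real chunk section serializer, and the 2MB figure is the client-side packet limit discussed in this thread:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class SectionSizeCheck {

    private static final int PACKET_LIMIT = 2 * 1024 * 1024; // 2MB client-side limit

    // Serializes all sections into a throwaway buffer just to see whether the
    // resulting chunk packet would fit under the limit.
    public static boolean fitsInOnePacket(Object[] sections) {
        ByteBuf scratch = Unpooled.buffer();
        try {
            for (Object section : sections) {
                writeSection(scratch, section); // stand-in for the real section serializer
            }
            return scratch.readableBytes() <= PACKET_LIMIT;
        } finally {
            scratch.release();
        }
    }

    private static void writeSection(ByteBuf buf, Object section) {
        // Stand-in: real code would write the palette, block data and light here.
    }
}
```

This is exactly the performance concern raised above: the chunk gets serialized once just to measure it, then again for whatever packets are actually sent.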
1) Not viable; the client doesn't support this, as it cares about the size of the actual MC packet and nothing else. 2MB is the limit here.
2) See 1; you'd need to modify the client.
3) The issue is that you need to determine where the split needs to be. I have some ideas on how to achieve this, but there are definite performance concerns, and I would need to rewrite how the "extra" packet stuff actually works, likely down into the packet encoder, which is not something I'm all too fond of, but I don't see a cleaner way to support this mess.
I have an idea of how to do this; the issue is that we start needing to throw around more buffers. We don't know the size of anything until we actually store it into a buffer, which creates a lot of joyous aspects: we'd basically need to have the main packet buffer, then another buffer for writing TE data to, then another buffer to write individual data to so we can see whether it's going to overflow or not.
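To make that concrete, a hedged sketch of the multi-buffer approach described above; writeTileEntity is a placeholder for the real NBT serialization, and the byte budget would come from whatever headroom the chunk packet has left:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import java.util.List;

public class TileEntityBudget {

    // Writes tile entities into the packet buffer until the next entry would
    // push the total past the budget.
    public static void writeTileEntities(ByteBuf packetBuf, List<Object> tileEntities, int byteBudget) {
        ByteBuf teBuf = Unpooled.buffer();   // TE data we have committed to so far
        ByteBuf scratch = Unpooled.buffer(); // used to measure each entry before committing it
        try {
            for (Object te : tileEntities) {
                scratch.clear();
                writeTileEntity(scratch, te); // placeholder for the real NBT serialization
                if (teBuf.readableBytes() + scratch.readableBytes() > byteBudget) {
                    break; // would overflow; stop here (or spill into a follow-up packet)
                }
                teBuf.writeBytes(scratch, 0, scratch.readableBytes());
            }
            packetBuf.writeBytes(teBuf);
        } finally {
            teBuf.release();
            scratch.release();
        }
    }

    private static void writeTileEntity(ByteBuf buf, Object te) {
        // Placeholder: real code would write the tile entity's NBT compound here.
    }
}
```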
Using ProtocolLib I observed the output of the MapChunk packets in more detail. (Ignore "Integers:"; that was a manual string left behind by mistake from previous debugging.) This console output comes from this MapChunk packet data:
List<NbtBase<?>> blockEntityData = thisPacket.getListNbtModifier().read(0);
if (Config.verbose) System.out.println(blockEntityData);
I presume the error code numbers are the byte size of a single List<NbtBase<?>> object.
Perhaps checking the size of these lists before they are sent to the user could be used to mitigate this.
The issue is that you don't know the size of the data until you actually store it in a buffer; see my last comment.
Oh, I assumed the player is kicked only after the packet is sent, so as long as the packet is intercepted and modified before being sent, all would be well. The issue is not server-side in my experience.
I apologize for my lack of understanding, though; I'm very new to Minecraft development.
The issue is server-side: the server kicks the client because an oversized packet hits the network pipeline, which causes an exception that boots the client.
Fixing this seems trivial on paper, but actually doing it is a whole different matter due to how this data actually works and how sending works; I need to modify the way packet sending is handled to be able to account for these oversized packets.
I understand. I appreciate you helping me understand better, thanks!
Is there some non-size-related way to divide that list? We could then form it into multiple packets, with the number of pieces determined by some int n of more or less equal subdivisions.
This way the size of anything need not be known, as we can guarantee a safe average packet size just by knowing, in general, approximately how big each subdivided MapChunk TE NBT list piece is.
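For reference, a purely count-based subdivision like the one suggested here could look something like this (plain Java; how the resulting sublists would be turned into packets is left out):

```java
import java.util.ArrayList;
import java.util.List;

public class ListSubdivider {

    // Splits a list into n roughly equal, order-preserving pieces.
    public static <T> List<List<T>> splitIntoParts(List<T> list, int n) {
        List<List<T>> parts = new ArrayList<>();
        int size = list.size();
        for (int i = 0; i < n; i++) {
            int from = (int) ((long) size * i / n);
            int to = (int) ((long) size * (i + 1) / n);
            parts.add(new ArrayList<>(list.subList(from, to)));
        }
        return parts;
    }
}
```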
We already have a non-size based one in there
For what it's worth, I've coded a band-aid fix that successfully and reliably prevents players from getting disconnected by the server whatsoever.
I'm doing this by measuring the size of the packet before it is sent to the user, and cancelling the packet completely / removing the chunk ban when the packet's block entity data list is too big.
Because you mentioned the player getting disconnected before we can measure the size, I find my successful results confusing in this context (since I presume Paper has deeper access to packets all along their stack than ProtocolLib does).
The coded listener: S2C_MapChunkPackets()
Video from QA: https://youtu.be/fA48USxQdfA
Better yet, this works, and without having to remove the blocks! Note that the 8192 block entity threshold is arbitrary and will be adjusted closer to the limit.
public void onPacketSending(PacketEvent event) {
    if (event.isCancelled()) return;
    PacketContainer thisPacket = event.getPacket();
    // Block entity NBT list stored in the MAP_CHUNK packet
    List<NbtBase<?>> blockEntityData = thisPacket.getListNbtModifier().read(0);
    if (blockEntityData.size() > 8192) {
        // Truncate the list so the serialized packet stays small enough not to kick the client
        List<NbtBase<?>> truncatedList = new ArrayList<>(blockEntityData.subList(0, 8192));
        thisPacket.getListNbtModifier().write(0, truncatedList);
    }
}
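For completeness, a sketch of how a listener like this is typically registered with ProtocolLib; the Plugin reference and the restriction to PacketType.Play.Server.MAP_CHUNK are assumptions about how S2C_MapChunkPackets() is wired up:

```java
import com.comphenix.protocol.PacketType;
import com.comphenix.protocol.ProtocolLibrary;
import com.comphenix.protocol.events.ListenerPriority;
import com.comphenix.protocol.events.PacketAdapter;
import com.comphenix.protocol.events.PacketEvent;
import org.bukkit.plugin.Plugin;

public class MapChunkListenerSetup {

    // Called once during plugin enable; "plugin" is the owning Bukkit plugin.
    public static void register(Plugin plugin) {
        ProtocolLibrary.getProtocolManager().addPacketListener(
                new PacketAdapter(plugin, ListenerPriority.NORMAL, PacketType.Play.Server.MAP_CHUNK) {
                    @Override
                    public void onPacketSending(PacketEvent event) {
                        // truncation logic from the snippet above
                    }
                });
    }
}
```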
Paper already does that
I'm not sure why it would be so, but I am unable to reproduce the exception with this listener set up, with all other variables identical to when the issue was 100% reproducible before adding the listener.
Perhaps understanding why this works for me could help resolve the issue at the level of the server.jar
Of course this will mean missing NBT data for some of the block entities, but since I've literally never heard of this situation occurring in any context outside of intentional chunk bans, it doesn't matter that the block entities are missing data client-side.
Followed the reproduction steps and placed about 98,000 furnaces in a single chunk, was not able to reproduce the error when leaving and re-joining/moving away and back, so I'm going to close this as resolved.
Expected behavior
No amount of any block / item in a chunk will terminate the connection of users' clients.
Observed/Actual behavior
In this case, over 40k furnaces were placed in a single chunk. On chunk unload no issues occur.
However, when a client first enters render distance of this chunk, whether it is loaded or not, they lose connection to the server with an internal exception:
Steps/models to reproduce
1- Start a brand new world
2- Use Litematica or WorldEdit to fill a single chunk with over 40k furnaces
3a- Unload the chunk
4a- Reload the chunk and observe the issue
3b- Log in with another client/account to the world, keeping the chunk ban loaded with the first account
4b- Approach the chunk ban from outside of render distance, or join the world in range of the chunk ban
Plugin list
https://github.com/LysergikProductions/RVAS-Core <-- Disable this to reproduce the issue (I've implemented a fix)
https://github.com/dmulloy2/ProtocolLib
https://github.com/Arnuh/ArmorEquipEvent
https://dev.bukkit.org/projects/laggremover
Paper version
git-Paper-607 (MC:1.16.5) (Implementing API Version 1.16.5-R0.1-SNAPSHOT)
Note: after removing the furnaces with Amulet Editor, the chunk can once again be reloaded by players in-game.