DasBabyPixel opened 1 year ago
Thanks for the issue. If you want to create a PR, we would appreciate that.
I have been investigating a little, and it seems this issue occurs because there are multiple PacketDispatcher threads on the wrapper. Everything is sent in the right order from the node, but the ChannelMessage query result gets read on one PacketDispatcher thread while a second PacketDispatcher thread is still writing to the temporary file. By the way, this issue should also apply to file transfers from wrapper to node.
The core issue is: the requesting ChannelMessage gets answered before writing to the file has finished.
There are multiple options to solve this:
The future from #transferChunkedData could complete only after the receiver has finished receiving the file, not after the sender has finished sending it. This requires the receiver to send a confirmation packet.
The other option is for the receiver to cache the request in the form of a future (similar to how ChannelMessage queries work). After receiving the query result, the receiver then has to wait for the cached request, which is completed from the TemplateStorageCallbackListener (see the sketch after this list). This option does not require any additional protocol (which is why I prefer it), but it might be a pain because the RemoteTemplateStorage is not in the wrapper but in the driver, so the zipTemplate would have to store the cached request in a class in the driver. To that end it might make sense to either move the RemoteTemplateStorage to the wrapper (because it is only used there) or move the TemplateStorageCallbackListener to the driver, because they are both part of the same API and don't work without each other.
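A minimal sketch of the cached-request idea, assuming one CompletableFuture per transfer. Every class and method name here is invented for illustration; only the pattern itself comes from the description above:

```java
import java.nio.file.Path;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Invented holder for pending chunked transfers on the receiving side.
final class PendingTransfers {

  private static final Map<UUID, CompletableFuture<Path>> PENDING = new ConcurrentHashMap<>();

  // registered before the ChannelMessage query is sent (e.g. by zipTemplate)
  static CompletableFuture<Path> register(UUID transferId) {
    return PENDING.computeIfAbsent(transferId, id -> new CompletableFuture<>());
  }

  // completed by the listener once the temporary file has been fully written
  // (the TemplateStorageCallbackListener's job in the issue's terminology)
  static void complete(UUID transferId, Path writtenFile) {
    CompletableFuture<Path> future = PENDING.remove(transferId);
    if (future != null) {
      future.complete(writtenFile);
    }
  }
}
```

After the query result arrives, the caller would wait on the future from register(transferId) instead of opening the file immediately, which closes the race without any extra protocol.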
Just wanted to ask if there is a preferred way of doing this before I code it and the PR gets rejected for something. Also, if I overlooked something or you know a better way, please don't hold back.
Stacktrace
Actions to reproduce
Try to open a template as a ZipInputStream from a wrapper:

```java
templateStorageProvider.localTemplateStorage().openZipInputStreamAsync(ServiceTemplate.parse("proxy/default"))
```
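For a self-contained picture, a hedged expansion of that call: only the call chain itself comes from the issue; that templateStorageProvider is in scope here and that the async result behaves like a CompletableFuture are assumptions.

```java
// Hedged reproduction sketch; with a template of a few MB, the future
// completes exceptionally because the query result arrives while the
// temporary file on the wrapper is still being written.
var template = ServiceTemplate.parse("proxy/default");
templateStorageProvider.localTemplateStorage()
    .openZipInputStreamAsync(template)
    .exceptionally(throwable -> {
      throwable.printStackTrace();
      return null;
    });
```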
CloudNet version
```
[19.08 22:10:36.283] INFO:
[19.08 22:10:36.283] INFO: CloudNet Blizzard 4.0.0-RC9 f6ca4c38
[19.08 22:10:36.283] INFO: Discord: https://discord.cloudnetservice.eu/
[19.08 22:10:36.283] INFO:
[19.08 22:10:36.284] INFO: ClusterId: 0cac9bb5--45dd--5d9b389b1b0f
[19.08 22:10:36.284] INFO: NodeId: Node-768d8079
[19.08 22:10:36.284] INFO: Head-NodeId: Node-768d8079
[19.08 22:10:36.284] INFO: CPU usage: (P/S) .23/7.21/100%
[19.08 22:10:36.284] INFO: Node services memory allocation (U/R/M): 2524/2524/6000 MB
[19.08 22:10:36.284] INFO: Threads: 50
[19.08 22:10:36.284] INFO: Heap usage: 40/256MB
[19.08 22:10:36.284] INFO: JVM: Eclipse Adoptium 17 (OpenJDK 64-Bit Server VM 17.0.6+10)
[19.08 22:10:36.285] INFO: Update Repo: CloudNetService/launchermeta, Update Branch: beta (development mode)
[19.08 22:10:36.285] INFO:
```
Other
The template might need to be a few MB in size; very small templates might not trigger the synchronization bug, but larger ones trigger it reliably and the first exception is thrown. The second exception is a bug where the temp directory doesn't exist in the wrapper.
The two exceptions are in the same bug report because they go hand in hand here and here.
The synchronization bug I mean is that the ChannelMessage query returns before the upload/download has finished and the file has been released.
The first exception appears before the second in the console; the order matters here.
I need this kind of urgently, which is why I would offer to make a PR. Adding something like mkdirs for the temp folder should suffice for stacktrace 2 (a sketch follows below).
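A minimal sketch of that fix, assuming the wrapper resolves the temporary file into a Path; tempFilePath is a placeholder name, not an actual field in CloudNet:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Placeholder sketch: ensure the temp directory exists before the chunked
// transfer opens the temporary file. tempFilePath is an invented name.
static void ensureTempDirectory(Path tempFilePath) throws IOException {
  Files.createDirectories(tempFilePath.getParent());
}
```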
For stacktrace 1 I'd probably redirect the future (after the received query, wait on a future/callback from the DefaultFileChunkedPacketHandler). Another option I thought of (to make it more future-proof) is some sort of confirmation from the receiving end of DefaultFileChunkedPacketSender; a sender-side sketch of that follows below.
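A hedged sender-side sketch of the confirmation idea; none of these types exist under these names in CloudNet, it only illustrates parking the returned future until the receiver's acknowledgement arrives:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Invented sender-side holder: the future handed to the caller completes only
// when the receiver confirms the file was written, not when sending finishes.
final class AckAwareChunkedSender {

  private final Map<UUID, CompletableFuture<Void>> awaitingAck = new ConcurrentHashMap<>();

  CompletableFuture<Void> transferChunkedData(UUID transferId) {
    CompletableFuture<Void> done = new CompletableFuture<>();
    this.awaitingAck.put(transferId, done);
    // ... send all chunks over the network here ...
    return done; // completed in handleAck, not after the last chunk is sent
  }

  // invoked by a packet listener when the receiver's confirmation packet arrives
  void handleAck(UUID transferId) {
    CompletableFuture<Void> done = this.awaitingAck.remove(transferId);
    if (done != null) {
      done.complete(null);
    }
  }
}
```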
Let me know if I should do this, or if the TemplateStorage API not working is enough to get this to the top of your todo list :)
Issue uniqueness