boris-petrov opened this issue 5 years ago
More likely the remote server is closing down the connection for some reason, possibly as a result of not liking something in the write request. A packet capture would be helpful to figure out what might go wrong there.
Hi, thanks for the support! If by "packet capture" you mean something like a Wireshark dump, I think I have one. No idea if it is the right thing or if it contains the correct information but you can tell me if I did it wrong so I can try again.
By the way, the Samba server I'm using is on a NAS - QNAP TS-453 Pro. Maybe they have a broken implementation or there is a bug in jcifs-ng?
Thanks for the dump. The remote server is unexpectedly closing the connection after the last write request we send (packet 1437). I don't see anything wrong with that request, so this really might be a (somewhat strange) server issue. Maybe the server logfiles provide further information, or you could try upgrading the firmware.
I've got another problem with the same NAS, not sure if it is related - when accessing a large file, it seems to take a very long time to download it. I can see in Wireshark a million requests like Read Request Len:8192 Off:88817664 - as far as I understand, jcifs-ng makes a bunch of requests for 8 KB of data each. Why is that? Why doesn't it request much more? This causes massive slowdowns.
Most likely, the read size is limited by the buffer size you are giving it. Otherwise it may be limited by the negotiated maximum size.
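A minimal sketch of what that means with jcifs-ng's stream API (the smb:// URL and the default SingletonContext are placeholders, adjust for your share and credentials): as far as I understand, each read(byte[]) call turns into at most one SMB READ of roughly that size, capped by whatever maximum was negotiated with the server, so passing a larger buffer directly reduces the number of round trips.

```java
import java.io.InputStream;
import jcifs.context.SingletonContext;
import jcifs.smb.SmbFile;
import jcifs.smb.SmbFileInputStream;

public class LargeReadExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and default context -- adjust for your share/credentials.
        SmbFile file = new SmbFile("smb://nas/share/large.bin", SingletonContext.getInstance());
        try (InputStream in = new SmbFileInputStream(file)) {
            byte[] buf = new byte[1024 * 1024]; // 1 MiB per read() instead of 8 KiB
            int n;
            while ((n = in.read(buf)) != -1) {
                // ... process n bytes; each iteration is one large SMB READ,
                // up to the maximum negotiated with the server
            }
        }
    }
}
```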
I'm using jcifs-ng with Apache VFS, I don't think it does anything special for buffer sizes. What should I be looking for in Wireshark for the "negotiated maximum size"?
The max read/write sizes can be found in the SMB2 Negotiate response received from the server in the beginning of a connection. But I would much rather assume that this is a buffer size issue somewhere. 8192 seems to be the default buffer size for the BufferedOutputStream, so likely something before that is doing IO with smaller sizes.
OK, I was using Apache Tika to detect mime types, and it actually copies an input stream locally by doing a Files.copy, which internally does in.transferTo(out), which uses a buffer of size 8192. So this is where the number came from... I'm not sure if this is a normal thing for the JDK to do, but if it is, doesn't that mean that jcifs-ng has to do some internal buffering? Because right now, if I have an SmbFileInputStream and do a transferTo to some output stream, the same performance problem will occur. Not sure how these things are supposed to work.
P.S. transferTo is a new method on InputStream since Java 9, in case you're wondering.
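For context, the default InputStream.transferTo(OutputStream) added in Java 9 is essentially the loop below (paraphrased here, not the verbatim JDK source; DEFAULT_BUFFER_SIZE in the JDK is 8192), which is where the fixed 8 KiB reads against the SMB stream come from.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class TransferToShape {
    // Paraphrase of the JDK's default InputStream.transferTo: a fixed 8192-byte
    // buffer, so every iteration issues an 8 KiB read against the source stream.
    static long transferTo(InputStream in, OutputStream out) throws IOException {
        long transferred = 0;
        byte[] buffer = new byte[8192]; // the JDK's DEFAULT_BUFFER_SIZE
        int read;
        while ((read = in.read(buffer, 0, buffer.length)) >= 0) {
            out.write(buffer, 0, read);
            transferred += read;
        }
        return transferred;
    }
}
```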
I'd really want to avoid adding buffering on the library side; it adds yet another memory copy and is totally unnecessary if the streams are used correctly. You could add a buffered stream wrapper around it.
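A minimal sketch of that wrapper on the caller side (again with a placeholder URL and the default SingletonContext; the 1 MiB size is an arbitrary choice): the BufferedInputStream absorbs the consumer's small reads and refills its internal buffer with correspondingly large reads against the SMB stream.

```java
import java.io.BufferedInputStream;
import java.io.InputStream;
import jcifs.context.SingletonContext;
import jcifs.smb.SmbFile;
import jcifs.smb.SmbFileInputStream;

public class BufferedSmbRead {
    public static void main(String[] args) throws Exception {
        // Placeholder URL -- adjust for your share/credentials.
        SmbFile file = new SmbFile("smb://nas/share/large.bin", SingletonContext.getInstance());
        try (InputStream in = new BufferedInputStream(new SmbFileInputStream(file), 1024 * 1024)) {
            byte[] small = new byte[8192]; // e.g. what Tika or a default transferTo would use
            while (in.read(small) != -1) {
                // the 8 KiB reads are served from the 1 MiB buffer, which
                // BufferedInputStream refills with large reads on the SMB stream
            }
        }
    }
}
```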
In this direction (SmbFileInputStream -> some other OutputStream) the behavior of transferTo could be fixed by overriding the default implementation and using the correct buffer size, I guess, but the other direction is out of our control.
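A rough sketch of what such an override could look like, written here as a user-side subclass purely for illustration (it assumes SmbFileInputStream is extensible in your version; inside the library the buffer size could be derived from the negotiated read size rather than the arbitrary 1 MiB used here):

```java
import java.io.IOException;
import java.io.OutputStream;
import jcifs.smb.SmbFile;
import jcifs.smb.SmbFileInputStream;

// Illustration only: same loop as the JDK default transferTo, but with a
// buffer large enough that each underlying read becomes a big SMB READ.
public class LargeBufferSmbInputStream extends SmbFileInputStream {

    public LargeBufferSmbInputStream(SmbFile file) throws IOException {
        super(file);
    }

    @Override
    public long transferTo(OutputStream out) throws IOException {
        long transferred = 0;
        byte[] buffer = new byte[1024 * 1024]; // arbitrary; could track the negotiated read size
        int read;
        while ((read = read(buffer, 0, buffer.length)) >= 0) {
            out.write(buffer, 0, read);
            transferred += read;
        }
        return transferred;
    }
}
```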
This is the stacktrace:
I think the stacktrace should be read from the bottom up - that is, the bottom error causes the warnings above it.
I'm using Apache Commons VFS (which shouldn't matter). As you see, I called close on it, which called flush on BufferedOutputStream, which in turn called write on SmbFileOutputStream. And this blows up with java.net.SocketException: Connection reset for reasons that are mysterious to me. I've been debugging this for a while now, do you have any idea what I might be doing wrong? Is it possible that my application code somehow causes the socket to be closed before the write is complete? How is this possible?
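For completeness, a minimal reproduction of the call chain described above (placeholder URL and the default SingletonContext, not the actual application code): closing the BufferedOutputStream flushes its remaining bytes, and that flush becomes the final write on the SmbFileOutputStream, which is where the connection reset surfaces.

```java
import java.io.BufferedOutputStream;
import java.io.OutputStream;
import jcifs.context.SingletonContext;
import jcifs.smb.SmbFile;
import jcifs.smb.SmbFileOutputStream;

public class WriteThenClose {
    public static void main(String[] args) throws Exception {
        // Placeholder URL -- adjust for your share/credentials.
        SmbFile file = new SmbFile("smb://nas/share/out.bin", SingletonContext.getInstance());
        try (OutputStream out = new BufferedOutputStream(new SmbFileOutputStream(file))) {
            for (int i = 0; i < 100; i++) {
                out.write(new byte[1000]); // small writes accumulate in the default 8 KiB buffer
            }
        } // close() -> flush() -> a final SmbFileOutputStream.write(...), where the reset shows up
    }
}
```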