Closed lmyslinski closed 4 years ago
Hi,
we did not encounter this problem since we use much smaller files. I looked at the Splitter class and I think the problem is that it splits the data after compression and encryption, whereas the comment/specification says splitting should happen after compression, encryption, and base64 encoding. Base64 inflates the data by about 33%, which would also explain your total request size of ~1.3MB.
I think your solution of lowering the splitting threshold is good; 750kB (1MB / 1.33) should work, but 500kB is fine too (the specification also allows splitting before the 1MB limit is reached).
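A minimal sketch of where the 750kB figure comes from, assuming the 1MB limit applies to the base64-encoded chunk. This is not the library's actual Splitter code, just the arithmetic:

```java
public class SplitThreshold {
    // base64 encodes every 3 raw bytes as 4 characters, inflating data by ~33%.
    // To keep the encoded chunk under the 1MB limit, the raw
    // (compressed + encrypted) data must be split before it reaches this size.
    static int maxRawChunk(int encodedLimitBytes) {
        return encodedLimitBytes / 4 * 3;
    }

    public static void main(String[] args) {
        System.out.println(maxRawChunk(1_000_000)); // prints 750000
    }
}
```

With a 1MB encoded limit this yields exactly 750kB of raw data per chunk, matching the 1MB / 1.33 estimate above.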
Please send a pull request to fix this.
Sure, I can adjust the solution to use ~700kB and publish an MR. I've also added junit and mockito to the project and written some tests that verify the actual request. Should I add those to the MR as well? Can you tell me where exactly the base64 encoding is applied in the codebase?
Sounds good! Tests are really missing in this project. The base64 encoding happens when the object is converted to XML. So in this line: sender.send(new ByteArrayContentFactory(uploader.prettyPrint())); the prettyPrint call returns the object as XML, with the byte[] data serialized as base64.
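As a rough sketch of what that marshalling step effectively does to the byte[] payload (the element name here is hypothetical, not the real EBICS schema):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class XmlPayloadSketch {
    // Illustrative stand-in for the XML conversion: the binary payload
    // ends up embedded as base64 text inside an element.
    static String toXmlElement(byte[] data) {
        return "<OrderData>" + Base64.getEncoder().encodeToString(data) + "</OrderData>";
    }

    public static void main(String[] args) {
        byte[] segment = "compressed+encrypted bytes".getBytes(StandardCharsets.UTF_8);
        System.out.println(toXmlElement(segment));
    }
}
```

This is why the chunk that the Splitter produces is smaller than the chunk the bank actually receives: the inflation happens downstream of the split.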
Yeah I started diving into the implementation here, but I gave up a few levels in. I'll look into it and publish the MR.
Also, where did you find the documentation you mentioned?
The EBICS specification is here: http://www.ebics.org/technical-information/ (the file "2017-03-29-EBICS V 3.0-FinalVersion.pdf"); look at Chapter 7 (page 150).
Thanks, I didn't see it on the website before. I'm surprised it's actually kept up to date, given how outdated the site looks.
Hello, my company is using a fork of this library and encountered an issue with one of the German banks at the end of last year. As part of the EBICS standard, a file upload request must be chunked into parts no bigger than 1MB. This is what the Splitter class is used for. However, it only splits the data that the file consists of, not the request itself (obviously). Once that data is put into a request (an XML entity with a data chunk embedded in it), markup overhead of 150-300kB is usually added. As a result, the total request size can be as big as ~1.3MB.

Recently a request got denied by the bank because it was over the 1MB limit. We've solved this by making sure individual data chunks are no bigger than 500kB. It's a slightly hacky solution, and I can't help but conclude that the bank must have implemented the standard incorrectly: it should be verifying the data chunk size instead of the request size. Has anyone else encountered this issue? If so, I'm happy to submit an MR with our fix. Please let me know what the behaviour should be on the bank's side. Thanks