BrettMayson opened this issue 5 years ago:
It's being caused by an em dash (—) in the filename `RP_Commands — kopia.hpp`.
I determined this is caused by the PBO storing its filenames in Windows-1252. The em dash (—) in that encoding is not valid UTF-8, so it cannot be read in a UTF-8 context, and that triggers this bug.
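A minimal demonstration of why this fails, using plain Rust (byte values assumed from the Windows-1252 code page, where the em dash is 0x97):

```rust
fn main() {
    // 0x97 is the Windows-1252 byte for the em dash (U+2014); on its own it
    // is not a valid UTF-8 sequence, so a strict decode fails.
    let cp1252_em_dash = [0x97u8];
    assert!(std::str::from_utf8(&cp1252_em_dash).is_err());

    // The same character encoded as UTF-8 is three bytes long.
    assert_eq!("\u{2014}".as_bytes(), &[0xE2, 0x80, 0x94]);
}
```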
So is this even a real bug in armake then? :thinking:
Yes, it can be handled
But only if the encoding has some kind of identifying header, right? Is this the case?
The invalid character can also be dropped by doing a lossy conversion to UTF-8; that is what I did as a workaround.
By that I assume you mean decoding the input with UTF-8 and throwing away all invalid bits and pieces that come up during that conversion?
Yes
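For reference, a minimal sketch of such a lossy conversion using only the Rust standard library; note that `String::from_utf8_lossy` replaces invalid sequences with U+FFFD rather than dropping them outright:

```rust
fn main() {
    // The filename from the report, with the em dash stored as the single
    // Windows-1252 byte 0x97.
    let raw: &[u8] = b"RP_Commands \x97 kopia.hpp";
    let name = String::from_utf8_lossy(raw);
    assert_eq!(name, "RP_Commands \u{FFFD} kopia.hpp");
}
```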
Not sure if that is good practice to apply in the general case. If this messes something up, the user might get really confused because the error message complains about character sequences that don't exist in the original. Or even worse: armake doesn't detect any error at all, but some of the scripts end up corrupted. That would only be noticed in Arma, leading to some really awkward bugs.
I think it would be better to error out on such things, at least by default. We could add another option that can be set to force the lossy conversion in such cases.
Handling this sloppily could lead to problems: different tools handling this issue differently could end up calculating different hashes when signing or verifying PBOs, so at the very least a warning should be shown. Obviously it should be possible to handle it in some way, since otherwise it would be very simple to prevent a PBO from being unpacked at all.
We could try reading the PBO with multiple encodings (try UTF-8; on error, try another).
As an alternative, we could do a lossy conversion if the normal reading fails and somehow mark the file, e.g. by adding a fat comment on top.
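A rough sketch of that fallback (`decode_lossy_with_marker` is a hypothetical helper, not an existing armake2 function):

```rust
// Try strict UTF-8 first; only on failure convert lossily and prepend a
// banner comment so the replacement characters are not silently hidden.
fn decode_lossy_with_marker(raw: &[u8]) -> String {
    match std::str::from_utf8(raw) {
        // Valid UTF-8: pass the content through unchanged.
        Ok(s) => s.to_owned(),
        // Invalid UTF-8: replace bad sequences with U+FFFD and mark the file.
        Err(_) => format!(
            "// WARNING: file was not valid UTF-8; invalid byte sequences were replaced\n{}",
            String::from_utf8_lossy(raw)
        ),
    }
}
```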
I think the problem is detecting whether decoding with the current charset has failed. It's not like it would throw exceptions when attempting to use the wrong charset. You'd have to actually go through the decoded content and check whether it looks like proper text/code or whether it is rubbish...
Using non-lossy UTF-8 will fail, at least with the RHS file from the original comment.
Maybe this could be handy here: https://github.com/lifthrasiir/rust-encoding We could iterate over all charsets available in that library and see if the decoding works properly. If not (from the README I assume that this library also throws errors if decoding fails), try the next one (and so on).
That would be exactly what I meant. Good catch.
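A sketch of how that could look with the `encoding` crate from that repository, restricted here to two candidate charsets for brevity (the decode API follows the crate's README; `decode_filename` itself is a hypothetical helper):

```rust
use encoding::all::{UTF_8, WINDOWS_1252};
use encoding::{DecoderTrap, Encoding, EncodingRef};

// Try candidate charsets in order and return the first strict (non-lossy)
// decode that succeeds, together with the name of the encoding that worked,
// so the caller can at least warn about non-UTF-8 input.
fn decode_filename(raw: &[u8]) -> Option<(String, &'static str)> {
    let candidates: &[EncodingRef] = &[UTF_8, WINDOWS_1252];
    for enc in candidates {
        if let Ok(s) = enc.decode(raw, DecoderTrap::Strict) {
            return Some((s, enc.name()));
        }
    }
    None
}

fn main() {
    // The Windows-1252 filename from the original report (0x97 = em dash).
    let raw: &[u8] = b"RP_Commands \x97 kopia.hpp";
    let (name, charset) = decode_filename(raw).expect("no candidate charset matched");
    println!("decoded with {}: {}", charset, name);
    assert_eq!(name, "RP_Commands \u{2014} kopia.hpp");
}
```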
As I'm doing a mass signing of files, I can say that probably 3-5% of all files (roughly 70 GB of PBOs) have non-UTF-8 encoding. So it would be very beneficial if armake2 automatically switched through encodings when one fails.
I'm wondering how armake1 handles this. I used it until now and it never threw any errors on this.
Another error, but with packing:

```
error: Failed to write PBO: Windows stdio in console mode does not support writing non-UTF-8 byte sequences
```
The error also seems to occur when reading an LZSS-compressed PBO. It comes from `read_cstring`; one file I know that causes it is `@rhsusaf/addons/rhsusf_c_radio.pbo`, while trying to use `PBO::read` or `armake2 unpack`.
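For context, a hypothetical reconstruction of what a `read_cstring`-style helper does (not armake2's actual code): PBO headers store filenames as NUL-terminated byte strings, and converting those bytes with a strict UTF-8 check is exactly where a Windows-1252 byte such as 0x97 makes reading fail.

```rust
use std::io::{self, Read};

// Read bytes up to the NUL terminator, then require them to be valid UTF-8.
fn read_cstring(mut reader: impl Read) -> io::Result<String> {
    let mut bytes = Vec::new();
    let mut buf = [0u8; 1];
    loop {
        reader.read_exact(&mut buf)?;
        if buf[0] == 0 {
            break;
        }
        bytes.push(buf[0]);
    }
    String::from_utf8(bytes).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
}

fn main() {
    // The problematic filename bytes followed by a NUL terminator, roughly as
    // they would appear in a PBO header entry.
    let header: &[u8] = b"RP_Commands \x97 kopia.hpp\0";
    let err = read_cstring(header).unwrap_err();
    println!("strict UTF-8 read fails: {}", err);
}
```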