KoffeinFlummi / armake2

Successor to armake written in Rust
GNU General Public License v2.0

Handle UTF-8 errors when reading PBOs #34

Open BrettMayson opened 5 years ago

BrettMayson commented 5 years ago
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: FromUtf8Error { bytes: [82, 80, 95, 67, 111, 109, 109, 97, 110, 100, 115, 32, 151, 32, 107, 111, 112, 105, 97, 46, 104, 112, 112], error: Utf8Error { valid_up_to: 12, error_len: Some(1) } }', src/libcore/result.rs:997:5

Comes from read_cstring. One file I know causes it is @rhsusaf/addons/rhsusf_c_radio.pbo, when using PBO::read or armake2 unpack.
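The failure is reproducible from the panic message alone; a minimal sketch, using the byte values verbatim from the error above:

```rust
fn main() {
    // Filename bytes from the panic message: "RP_Commands ", then byte
    // 151 (0x97), then " kopia.hpp". 0x97 is not valid UTF-8 on its own.
    let bytes: Vec<u8> = vec![
        82, 80, 95, 67, 111, 109, 109, 97, 110, 100, 115, 32, 151, 32, 107,
        111, 112, 105, 97, 46, 104, 112, 112,
    ];

    // Strict conversion errors with valid_up_to == 12, exactly as in the panic;
    // calling .unwrap() here is what crashes armake2.
    let err = String::from_utf8(bytes).unwrap_err();
    assert_eq!(err.utf8_error().valid_up_to(), 12);
    println!("invalid UTF-8 at byte {}", err.utf8_error().valid_up_to());
}
```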

BrettMayson commented 5 years ago

It's being caused by an em dash (—) in the filename RP_Commands — kopia.hpp

BrettMayson commented 5 years ago

I determined this was being caused by the PBO being encoded with Windows-1252 filenames. The Windows-1252 byte for the em dash (0x97) is not valid UTF-8, which causes this bug.
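For illustration: Windows-1252 agrees with ASCII below 0x80 and differs from Latin-1 only in the 0x80-0x9F range. A hedged sketch of a partial decoder, mapping just enough of that range to recover this particular filename (a real implementation would use a full table or a decoding crate):

```rust
/// Partial Windows-1252 decoder, for demonstration only: ASCII passes
/// through, 0x97 is mapped to EM DASH (its Windows-1252 meaning), and
/// everything else falls back to Latin-1 (incorrect for 0x80-0x9F!).
fn decode_windows_1252_partial(bytes: &[u8]) -> String {
    bytes
        .iter()
        .map(|&b| match b {
            0x97 => '\u{2014}',       // em dash in Windows-1252
            0x00..=0x7F => b as char, // ASCII
            _ => b as char,           // Latin-1 fallback (incomplete)
        })
        .collect()
}

fn main() {
    let bytes = [
        82, 80, 95, 67, 111, 109, 109, 97, 110, 100, 115, 32, 0x97, 32, 107,
        111, 112, 105, 97, 46, 104, 112, 112,
    ];
    let name = decode_windows_1252_partial(&bytes);
    assert_eq!(name, "RP_Commands \u{2014} kopia.hpp");
    println!("{}", name); // RP_Commands — kopia.hpp
}
```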

Krzmbrzl commented 5 years ago

So is this even a real bug in armake then? :thinking:

BrettMayson commented 5 years ago

Yes, it can be handled

Krzmbrzl commented 5 years ago

But only if the encoding has some kind of identifying header, right? Is this the case?

BrettMayson commented 5 years ago

The invalid bytes can also be dropped by doing a lossy conversion to UTF-8; that is what I did as a workaround.
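In Rust the lossy conversion is a one-liner; invalid sequences are replaced with U+FFFD rather than preserved (a sketch of the approach, not necessarily the exact workaround used):

```rust
fn main() {
    // Same filename bytes as in the panic message.
    let bytes = [
        82, 80, 95, 67, 111, 109, 109, 97, 110, 100, 115, 32, 151, 32, 107,
        111, 112, 105, 97, 46, 104, 112, 112,
    ];

    // from_utf8_lossy never fails; the lone 0x97 byte becomes one U+FFFD.
    let name = String::from_utf8_lossy(&bytes);
    assert_eq!(name, "RP_Commands \u{FFFD} kopia.hpp");
    println!("{}", name);
}
```

Note the em dash is not recovered: the replacement character loses the original byte, which is why this is "lossy".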

Krzmbrzl commented 5 years ago

By that I assume you mean decoding the input with UTF-8 and throwing away all invalid bits and pieces that come up during that conversion?

BrettMayson commented 5 years ago

Yes

Krzmbrzl commented 5 years ago

Not sure if that is a good practice that should be applied in the general case. If this messes something up, the user might get really confused because the error message complains about character sequences that don't exist in the original. Or even worse: armake doesn't detect the error at all, but some of the scripts end up messed up. The error would then only surface in Arma, leading to some really awkward bugs.

I think it would be better to error out on such things, at least by default. We could add another option that forces the lossy conversion in such cases.

KoffeinFlummi commented 5 years ago

Handling this sloppily could lead to problems. If different tools handle this issue differently, they could calculate different hashes when signing or verifying PBOs, so at least a warning should be shown. Obviously it should be possible to handle it in some way, since otherwise it would be very simple to prevent PBOs from being unpacked at all.
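The signing concern can be made concrete: a lossy conversion changes the bytes, so any hash computed over the rewritten name diverges from a hash over the original. A sketch using the standard library's hasher purely to demonstrate the divergence (actual PBO signatures use different hash functions):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash a byte slice with the stdlib's default hasher (demo only).
fn hash_bytes(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

fn main() {
    // "RP_Commands " followed by the problematic 0x97 byte.
    let original = [82u8, 80, 95, 67, 111, 109, 109, 97, 110, 100, 115, 32, 151];
    let lossy = String::from_utf8_lossy(&original);

    // A tool that hashes the original bytes and a tool that hashes the
    // lossily converted name will disagree.
    assert_ne!(hash_bytes(&original), hash_bytes(lossy.as_bytes()));
}
```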

Soldia1138 commented 5 years ago

We could add multiple tries to read the PBO in different encodings (Try UTF-8, error -> Try another).

As an alternative we could do a lossy conversion if the normal reading fails and somehow mark the file, like adding a fat comment on top

Krzmbrzl commented 5 years ago

I think the problem is detecting whether decoding with the current charset has failed. It's not like it would throw exceptions when attempting to use the wrong charset. You'd have to actually go through the decoded content and check whether it looks like proper text/code or whether it is rubbish...

BrettMayson commented 5 years ago

Using non lossy utf-8 will fail, at least with the RHS file from the original comment

Krzmbrzl commented 5 years ago

Maybe this could be handy here: https://github.com/lifthrasiir/rust-encoding We could iterate over all charsets available in that library and see if the decoding works properly. If not (from the README I assume that this library also throws errors if decoding fails), try the next one, and so on.
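The fallback chain can be sketched without committing to a particular crate: try strict UTF-8 first and only fall through to other decoders when it errors. Here the second decoder is a stand-in (plain Latin-1, which never fails but is wrong for the 0x80-0x9F range); a real implementation would plug in decoders from a library such as rust-encoding:

```rust
/// Try decoders in order; return the first successful result together
/// with the name of the charset that worked.
fn decode_with_fallback(bytes: &[u8]) -> (String, &'static str) {
    // 1. Strict UTF-8: errors on any invalid sequence.
    if let Ok(s) = std::str::from_utf8(bytes) {
        return (s.to_string(), "UTF-8");
    }
    // 2. Stand-in fallback: Latin-1 maps every byte to a code point, so
    //    it never fails. A real chain would try Windows-1252 etc. here.
    (bytes.iter().map(|&b| b as char).collect(), "Latin-1")
}

fn main() {
    let (s, charset) = decode_with_fallback(b"plain ascii name.hpp");
    assert_eq!((s.as_str(), charset), ("plain ascii name.hpp", "UTF-8"));

    // 0x97 is invalid UTF-8, so the fallback decoder is used.
    let (_, charset) = decode_with_fallback(&[0x97, 0x20, 0x41]);
    assert_eq!(charset, "Latin-1");
}
```

The weak spot Krzmbrzl mentions remains: a decoder that never fails "succeeds" on anything, so ordering the chain (strict decoders first) is what keeps the result sane.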

Soldia1138 commented 5 years ago

That would be exactly what I meant. Good catch.

As I'm doing a mass sign of files, I can say that probably 3-5% of all files (roughly 70 GB of PBOs) are not UTF-8 encoded. So it would be very beneficial if armake2 automatically switched through encodings when one fails.

I'm wondering how armake1 handles this. I used it until now and it never threw any errors on this.

Reidond commented 5 years ago

Another error, but with packing:
error: Failed to write PBO: Windows stdio in console mode does not support writing non-UTF-8 byte sequences

ArwynFr commented 3 years ago

The error also seems to occur when reading an LZSS-compressed PBO.