Closed — TravisEz13 closed this issue 8 years ago
Just a few thoughts on this:
Would this mean that any MOFs that are encoded in UTF-8 are actually incorrect and should be re-encoded in ANSI? Presumably most DSC repos currently have UTF-8-encoded MOF files. If that is the case, it might be good to create a script that updates all the MOF files for all repos in one go.
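A minimal sketch of what such a one-shot re-encoding script could look like (the function name, the `cp1252` choice for "ANSI", and the assumption that the MOFs contain only characters representable in that code page are all mine, not from this thread):

```python
# Hypothetical batch re-encoder: rewrites every .mof file under a repo
# root from UTF-8 to an ANSI code page (Windows-1252 assumed here).
# Only safe if the files contain characters representable in cp1252.
from pathlib import Path

def reencode_mofs(root, src="utf-8-sig", dst="cp1252"):
    """Re-encode all .mof files under `root`; return the paths touched."""
    touched = []
    for mof in Path(root).rglob("*.mof"):
        text = mof.read_text(encoding=src)   # "utf-8-sig" strips a BOM if present
        mof.write_bytes(text.encode(dst))    # write back without any BOM
        touched.append(mof)
    return touched
```

For pure-ASCII MOFs this rewrite is byte-for-byte harmless apart from dropping the BOM, since ASCII bytes are identical in UTF-8 and cp1252.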
Also, the current test checks that ALL files are not Unicode (UTF-16) encoded. Therefore I think the test would need to be split in two:
A new Fixer would also need to be added to Meta.Fixers.ps1 - ConvertTo-ANSI().
ANSI is equal to UTF-8 for byte values 0-127 (both use a single byte to represent these values), so I don't see the problem.
I think the confusion comes from the fact that PowerShell uses Unicode as an alias for the UTF-16 encoding (2 bytes per code unit). Unicode is not a particular encoding but a character set. Here is a great and fun overview: http://www.joelonsoftware.com/articles/Unicode.html .
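The character-set-vs-encoding distinction can be shown directly: the same Unicode string has different byte layouts under different encodings, and what PowerShell calls "Unicode" is really UTF-16 LE. A small demonstration:

```python
# One Unicode string, two encodings. "Unicode" is the character set;
# UTF-8 and UTF-16 are merely different byte layouts for it.
s = "DSC"
utf8 = s.encode("utf-8")       # 1 byte per ASCII character, no BOM
utf16 = s.encode("utf-16-le")  # 2 bytes per code unit (PowerShell's "Unicode")
assert utf8 == b"DSC"
assert utf16 == b"D\x00S\x00C\x00"
assert len(utf16) == 2 * len(utf8)
```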
Cool, thanks @vors - great article. I now realize my questions were a bit silly :smile:
Synced offline with Travis. I didn't realize that xDscResourceDesigner asks to convert MOFs to UTF-16 (though it also supports ANSI). So we have this loop: the Meta tests ask to convert to UTF-8 to make Git happy, while the xDscResourceDesigner tests ask to convert to UTF-16, and people have blocked PRs because of it. Thanks @TravisEz13 for taking action!
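The two test suites are effectively disagreeing over the file's leading bytes. A hedged sketch of the kind of BOM check a meta-test could use to tell the encodings apart (the function name and return labels are illustrative, not from the actual Meta.Tests.ps1):

```python
# Hypothetical BOM sniffer: classifies a file's encoding from its first
# bytes, the signal a meta-test would use to accept or reject a MOF.
def detect_bom(first_bytes):
    if first_bytes.startswith(b"\xff\xfe"):
        return "utf-16-le"            # PowerShell's "Unicode" default
    if first_bytes.startswith(b"\xfe\xff"):
        return "utf-16-be"
    if first_bytes.startswith(b"\xef\xbb\xbf"):
        return "utf-8-sig"
    return "ansi-or-utf8-no-bom"      # pure ASCII is ambiguous by design
```

Note that a BOM-less pure-ASCII file cannot be distinguished from ANSI or UTF-8, which is exactly why converting to ASCII-safe content sidesteps the conflict.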
https://github.com/PowerShell/DscResource.Tests/blob/master/Meta.Tests.ps1#L110