godotengine / godot-proposals

Godot Improvement Proposals (GIPs)

Add ability to configure endianness of PackedByteArray #9586

Open · aGuyWhoMadeGames opened this issue 6 months ago

aGuyWhoMadeGames commented 6 months ago

Describe the project you are working on

A RISC-V emulator that uses PackedByteArrays to store registers and memory.

Describe the problem or limitation you are having in your project

Currently, the encode_* and decode_* functions use little-endian byte ordering. This makes it very hard to interact with big-endian data.

Describe the feature / enhancement and how it helps to overcome the problem or limitation

The feature I am proposing is an option to change the byte ordering used by a specific PackedByteArray (there are more details in the next section). I'm sure this addition would be helpful in many other situations that involve reading and writing binary data, not just mine. It would still be backwards compatible because the default ordering would stay the same.

Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams

Consider this piece of code:

func _ready():
    var data = PackedByteArray([0x00,0xff])
    print("%x"%data.decode_u16(0))
    # prints "ff00"

It takes some binary data, reads it as a uint16_t, and prints it in hexadecimal. As you can see, the data is read as little endian: the more significant byte is the one located to the right of (at a higher offset than) the less significant byte. The two best ways I can think of to change the endianness are:

func _ready():
    var data = PackedByteArray([0x00,0xff])

    # a bool
    data.big_endian = true

    # or an enum
    data.endianness = PackedByteArray.ENDIANNESS_BIG_ENDIAN

    # now it reads the data as big endian
    print("%04x" % data.decode_u16(0))
    # prints "00ff"

The data is then read as big endian: the more significant byte is the one to the left (at the lower offset) of the less significant one. This could also be implemented as an argument of the encode_* and decode_* functions, e.g. data.decode_u16(0, true) would read the data as big endian.

If this enhancement will not be used often, can it be worked around with a few lines of script?

The data could be reordered before being read, but that is cumbersome: you have to do the reordering and then remap every byte offset. Alternatively, the data could be read with multiple calls to decode_u8() and then rearranged and combined using bitwise operations, but that is also cumbersome, especially for signed integers.
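
For illustration, a rough sketch of that second workaround; read_u16_be and read_s32_be are helper names I am making up just for this example, not an existing API:

func read_u16_be(data: PackedByteArray, offset: int) -> int:
    # Combine two bytes with the more significant one first.
    return (data.decode_u8(offset) << 8) | data.decode_u8(offset + 1)

func read_s32_be(data: PackedByteArray, offset: int) -> int:
    var value := 0
    for i in 4:
        value = (value << 8) | data.decode_u8(offset + i)
    # The bytes were combined as an unsigned value, so sign-extend manually.
    if value & 0x80000000:
        value -= 0x100000000
    return value

Every big-endian field ends up needing a helper like this (or a slice() plus reverse()), which is exactly the boilerplate this proposal would remove.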

Is there a reason why this should be core and not an add-on in the asset library?

This is a simple addition to a built-in data type.

timothyqiu commented 6 months ago

You can use the good old StreamPeerBuffer for this :)

var data = PackedByteArray([0x00, 0xff])
print("%04x" % data.decode_u16(0))  # ff00

var peer := StreamPeerBuffer.new()
peer.data_array = data
peer.big_endian = true
print("%04x" % peer.get_u16())  # 00ff

Calinou commented 6 months ago

The approach I use in https://github.com/Calinou/slippi-hitsounds:

const FRAME_START_SIZE = 12 + 1
const PERCENT_OFFSET = 0x3c

# Slice out the 4 bytes of the big-endian float, then reverse them so the
# little-endian decode_float() reads the value correctly.
var pl1 = payload.slice(FRAME_START_SIZE + PERCENT_OFFSET, FRAME_START_SIZE + PERCENT_OFFSET + 4)
pl1.reverse()
var new_player1_percentage := pl1.decode_float(0)

For context, see the original PR that added decoding methods to PackedByteArray:

  • All Little Endian because Big Endian is dead.

Given this, I wouldn't hold out hope for seeing big-endian support in PackedByteArray. Big endian is mostly found in legacy systems after all (such as the GameCube game integration I linked to above).

YuriSizov commented 5 months ago

For context, see the original PR that added decoding methods to PackedByteArray:

  • All Little Endian because Big Endian is dead.

Given this, I wouldn't hold out hope for seeing big-endian support in PackedByteArray.

MIDI is Big Endian, so I wouldn't be so categorical about this. You can configure the endianness of files you write in Godot, but not of PackedByteArray encoding and decoding, which leads to unnecessary hurdles.
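
As a rough illustration of that asymmetry (a sketch assuming Godot 4's FileAccess API; the MIDI header constant is just an example value):

# Files: one flag switches all store_*/get_* calls to big-endian.
var f := FileAccess.open("user://song.mid", FileAccess.WRITE)
f.big_endian = true
f.store_32(0x4D546864)  # "MThd", written most significant byte first

# PackedByteArray: encode_* is always little-endian, with no equivalent flag.
var header := PackedByteArray()
header.resize(4)
header.encode_u32(0, 0x4D546864)  # bytes come out reversed for this use case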

r-owen commented 4 months ago

PNG images are also big-endian. In my case, I got caught while manually decoding tEXt chunks: chunk sizes and CRCs are uint32 values and so need special handling. Either of the suggested enhancements would be very helpful; it's error-prone and clumsy to have to manually swap every slice of bytes that represents a multi-byte integer.
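
For example, just reading one chunk length currently looks something like this (a sketch; png_data and chunk_offset are placeholders):

# PNG chunk lengths are big-endian uint32, so slice out the 4 bytes,
# reverse them, and only then decode.
var length_bytes := png_data.slice(chunk_offset, chunk_offset + 4)
length_bytes.reverse()
var chunk_length := length_bytes.decode_u32(0)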