bkerler / edl

Unofficial Qualcomm Firehose / Sahara / Streaming / Diag Tools :)
GNU General Public License v3.0

setactiveslot: patch size too large 128 #511

Closed bongbui321 closed 6 months ago

bongbui321 commented 7 months ago

I'm trying to switch to another slot and I get this error:

program,read,nop,patch,configure,setbootablestoragedrive,erase,power,firmwarewrite,getstorageinfo,benchmark,emmc,ufs,fixgpt
firehose - [LIB]: slot A is active
firehose - [LIB]: in command patch
firehose - [LIB]: Error:
firehose - [LIB]: in command patch
firehose - [LIB]: Error:['INFO: Calling handler for patch', 'ERROR: patch size too large 128']
firehose - [LIB]: slot B is not active
firehose - [LIB]: in command patch
firehose - [LIB]: Error:
firehose - [LIB]: in command patch
firehose - [LIB]: Error:['INFO: Calling handler for patch', 'ERROR: patch size too large 128']

relates to https://github.com/bkerler/edl/issues/472

From the error message, it seems that a patch size of 128 is too large. I'm using a Snapdragon 845, so I should be using the sda845_sdm845 loader. Is 128 the correct length to send for this loader, or does it differ between loaders?
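For reference, a Firehose patch goes over the wire as a single XML element along these lines; the attribute names follow the patch0.xml files shipped with stock Qualcomm flashing packages, and the concrete values below are only illustrative:

```xml
<?xml version="1.0" ?>
<data>
  <patch SECTOR_SIZE_IN_BYTES="4096" byte_offset="48"
         filename="DISK" physical_partition_number="0"
         size_in_bytes="8" start_sector="2"
         value="0x003A000000000000"
         what="illustrative: rewrite the attributes of one GPT entry"/>
</data>
```

Stock patch files typically change only 4 or 8 bytes at a time (e.g. CRC fixups), which would be consistent with the programmer rejecting size_in_bytes="128".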

I'm just testing the setactiveslot command for now. I haven't flashed anything to the other slot (slot B in my case); could that be why I'm not able to switch to it?

RenateUSB commented 7 months ago

I've always found the patch function in Firehose to be absurd. What's the matter with read, patch (using some desktop patcher), program? That way you can control the before and after.

If you want to get the patching done in the near future, just do it in the three steps above.
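In edl terms that would be something like `edl rs 0 6 gpt_main.bin` to dump the primary GPT, a small script on the desktop to flip the bits, and `edl ws 0 gpt_main.bin` to write it back. A minimal sketch of the desktop step, assuming the Qualcomm A/B convention where the "active" flag is bit 50 of the 8-byte attributes field at offset 48 of each 128-byte entry; verify these assumptions against your own dump before writing anything back:

```python
import struct

SECTOR = 4096      # UFS sector size on sdm845; 512 on eMMC
ENTRY_SIZE = 128   # standard GPT partition entry size
ATTR_OFFSET = 48   # attributes field within an entry
ACTIVE_BIT = 50    # assumed Qualcomm A/B "active" attribute bit

def set_active(gpt: bytearray, entry_index: int, active: bool) -> None:
    """Flip the 'slot active' bit of one GPT entry in a raw GPT dump."""
    # Partition entries start at LBA 2 in a dump taken from sector 0.
    base = 2 * SECTOR + entry_index * ENTRY_SIZE + ATTR_OFFSET
    attrs = struct.unpack_from("<Q", gpt, base)[0]
    if active:
        attrs |= 1 << ACTIVE_BIT
    else:
        attrs &= ~(1 << ACTIVE_BIT)
    struct.pack_into("<Q", gpt, base, attrs & 0xFFFFFFFFFFFFFFFF)

# gpt_main.bin and the entry index are hypothetical; look the index up
# with edl printgpt first.
with open("gpt_main.bin", "rb") as f:
    gpt = bytearray(f.read())
set_active(gpt, entry_index=4, active=True)
with open("gpt_main.bin", "wb") as f:
    f.write(gpt)
```

One caveat with this route: the GPT header stores a CRC32 over the partition entry array and another over itself, so both checksums have to be recomputed after any bit flip, or the bootloader may discard the table and fall back to the backup GPT.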

bongbui321 commented 6 months ago

@bkerler hi, following up on your suggestion to minimize the data length of the patch: I tried deleting the trailing 0x00 bytes in the partition name and got it down to around 72 bytes, but the error is still the same (this time "patch size too large 72"). I don't think it's a good idea to shrink the other fields (type, first/last LBA, or unique GUID), since those identify the partition. I'm not sure if there is an error in the loader.

Also, I have tried all the loaders in https://github.com/bkerler/Loaders/tree/main/qualcomm/factory/sdm845_sdm850_sda845, and all of them output the same error.

You mentioned that this relates to the GPT structure, but reading the GPT structure in 128-byte units is fine and returns the right info. This is easy to confirm from cmd_read_buffer(), which is used to read the partition info. So a partition entry should fit into 128 bytes.

Hmm, I looked at this issue, and it seems patching with size 8 is fine. Do you think we should split the patch into 8-byte pieces and patch while updating the byte offset? Or just patch the 2 bytes that indicate whether a partition is active, using the correct byte offset? I'm not sure whether the byte offset only works at the granularity of an entry (128 bytes) or byte-by-byte as well.
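If byte_offset does turn out to be byte-granular, the splitting I describe would look roughly like this sketch. `send_patch` is a hypothetical stand-in for whatever actually issues the Firehose patch command, and `MAX_PATCH = 8` reflects the size that was observed to work:

```python
MAX_PATCH = 8  # largest size_in_bytes the programmer accepted in testing

def patch_chunked(send_patch, start_sector: int, byte_offset: int, data: bytes) -> None:
    """Split one oversized patch into MAX_PATCH-byte Firehose patches,
    advancing byte_offset for each piece."""
    for pos in range(0, len(data), MAX_PATCH):
        chunk = data[pos:pos + MAX_PATCH]
        # the value goes out as an integer matching the on-disk byte order
        send_patch(start_sector=start_sector,
                   byte_offset=byte_offset + pos,
                   size_in_bytes=len(chunk),
                   value=int.from_bytes(chunk, "little"))
```

Patching only the attribute bytes of the entry would avoid the chunking entirely, since that write already fits under the 8-byte limit.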

Can you confirm whether there is a limit on the patch size?

Edit: I PRed a fix