Closed tzyloveopencv closed 2 years ago
Hi Jason, I sent you an email with the related files and image! Please take a look.
Hi tzyloveopencv, Is NON-HLOS.ubi the original UBI file from the device manufacturer? I've extracted, rebuilt, and re-extracted the UBI image without issue, no differences. I've also built an image from the extracted files_to_make_img and rebuilt it, no difference. The only difference I get is between files_to_make_img and the files extracted from NON-HLOS.ubi. It seems to me the errors are being introduced from outside ubi_reader. -Jason
Hi Jason,
Thanks for your reply! NON-HLOS.ubi is built from files_to_make_img. I used the following command to build it.
The only difference you mentioned is exactly the problem I ran into. Sorry, I'm not particularly familiar with UBI, so I can only guess that the problem is in mkfs.ubifs or ubi_reader. This problem has been bothering me for days, so I had to ask you for some help.
Thank you! Jayde
Hi Jason, I've confirmed that the problem is caused by ubireader_extract_files. I have used another UBI parsing tool before, linked here: https://github.com/nlitsme/ubidump.
I tried parsing the UBI image with ubidump.py, and the files extracted from NON-HLOS are the same as the original files. I sent the result to your email.
Thank you!
Jayde
Hi Jason,
I parsed NON-HLOS.ubi with ubidump.py, and the extracted files are the same as the original files.
The ubidump.py is downloaded from https://github.com/nlitsme/ubidump.
I fixed some bugs in the script and attached it to this email.
Thanks!
-Jayde
I'm not really sure what is going on here. It looks like ubi_reader is getting confused by a couple of files, and I can't figure out what is wrong. I'll see what I can do, but this might take a while. Thanks for pointing this out.
-Jason
Hi Jayde,
Looks like I got it fixed, that error has been there a long time. It was failing to handle blocks of data filled with 0x00 at the beginning of files, instead appending them to the end of the file. Thank you very much for pointing this out.
-Jason
Hi Jason,
Great! Thank you very much, it helps me a lot! I have to say that you are very efficient and capable. In fact, I also found the pattern of the problem and tried to solve it myself, but it didn't work. I have verified that you did solve the problem. However, embarrassingly, I don't quite understand your modification, so I will study the principle behind your implementation.
Finally, thank you very much!!
-Jayde
Hi Jayde,
last_khash = sorted_data[0].key['khash']-1
That was the old way of doing it. khash is a sequential set of numbers that keeps the data nodes in order. Where I went wrong was assuming the first node I found was where the file started, but UBIFS will not save blocks that are all 0x00. I had solved that for blocks in the middle and at the end of a file, but failed to do it for the beginning. What I did not realize is that the key sequence starts at the same value for every file, as seen below with start_key. Once I figured that out, it was just a matter of setting last_khash to start_key - 1 (the -1 fits in with the logic already in the script), so that for every number missing from the sequence starting there, a block of 0x00 data is written:
start_key = 0x00 | (UBIFS_DATA_KEY << UBIFS_S_KEY_BLOCK_BITS)
These blocks of 0x00 are referred to as "holes" in the UBIFS source code. The 0x00 being OR'd into start_key is the block number: 0 for the first block of data, a block being 4096 bytes. The next block in the sequence would be 0x01. If a data node exists for that block, its number is found in the key, called khash in the ubi_reader sources.
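The hole-filling logic described above can be sketched roughly as follows. This is a minimal illustration, not the actual ubi_reader source: the constant values and the (khash, data) node representation are assumptions made for the example.

```python
# Sketch of rebuilding a file from UBIFS data nodes when some blocks are
# "holes" (all 0x00) that UBIFS never wrote to flash.

UBIFS_BLOCK_SIZE = 4096
UBIFS_DATA_KEY = 1            # key type for data nodes (assumed value)
UBIFS_S_KEY_BLOCK_BITS = 29   # block number lives in the low 29 bits

KEY_TYPE = UBIFS_DATA_KEY << UBIFS_S_KEY_BLOCK_BITS

# Example file: blocks 0 and 2 are holes, so only data nodes for
# blocks 1 and 3 exist on flash.
nodes = [
    (0x01 | KEY_TYPE, b"A" * UBIFS_BLOCK_SIZE),
    (0x03 | KEY_TYPE, b"B" * 100),
]

def rebuild(nodes):
    # The key sequence starts at the same value for every file, so any
    # keys missing before the first data node are leading holes.
    start_key = 0x00 | KEY_TYPE
    last_khash = start_key - 1   # the fix: anchor at start_key, not nodes[0]
    out = bytearray()
    for khash, data in sorted(nodes):
        # Every khash missing from the sequence is a block of 0x00.
        out += b"\x00" * (UBIFS_BLOCK_SIZE * (khash - last_khash - 1))
        out += data
        last_khash = khash
    return bytes(out)

data = rebuild(nodes)
```

With the old `last_khash = sorted_data[0].key['khash'] - 1` anchoring, the leading hole at block 0 would never be emitted and the file would come out shifted; anchoring at `start_key - 1` restores it.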
I hope that helps explain it. It was a very long time ago that I wrote this particular code, so it took me a while to put the pieces together this time.
Thanks again for bringing this up. -Jason
Hi Jason,
You are really kind, and you explained the principle to me, thank you so much! Now I understand.
The start k_hash is at a fixed value, "0x20000000". If a file begins with 0x00 blocks, some k_hashes will be lost, because the first k_hash was assigned as sorted_data[0].key['khash']-1.
UBIFS will not save blocks that are all 0x00.
I got it. Thanks a lot!
-Jayde
No problem, glad I could help.
I used mkfs.ubifs to make a xxx.ubifs, and then ran ubireader_extract_files -k xxxx.ubifs. However, the contents of three files differ from the originals.