ivlovric / HFP

HEP Fidelity Proxy

HFP does not work for established calls when interrupting ssh tunnel #13

Closed bilalrao12 closed 1 year ago

bilalrao12 commented 1 year ago

Hi,

I am using HFP with an SSH tunnel so that when the tunnel is down, HFP stores the packets and sends them once the tunnel is up again.

AGENT -------> HFP ---------> SSH TUNNEL -----------> DEST SERVER

I tested this with the two scenarios below. For new connections it works fine, i.e. packets are stored and sent when the tunnel comes back up. However, for connections that were active when the tunnel was interrupted, it throws the following error and does not send the remaining packets.

2023/07/07 07:30:23 ||--> Sending HEP OUT error: write tcp4 192.168.10.14:42496->192.168.10.14:9062: write: broken pipe
2023/07/07 07:30:23 ||-->X File Send HEP from buffer to file error read tcp4 192.168.10.14:9060->192.168.10.27:57982: use of closed network connection


Testing pattern 1:

1. Take down the ssh tunnel
2. Make and complete the test call
3. Bring up the ssh tunnel

Result: Works fine


Testing pattern 2:

1. ssh tunnel up
2. Make a call, then bring down the ssh tunnel
3. During the call, bring up the ssh tunnel again

Result: In this case the complete call flow (all packets) was missing for the call during which the tunnel went down.

[root@localhost ~]# ./HFP -l 0.0.0.0:9060 -r 192.168.10.14:9062 -d on
2023/07/07 07:30:05 mkdir HEP: file exists
Saved HEP file is 0 bytes long
Listening for HEP on: 0.0.0.0:9060
Proxying HEP to: 192.168.10.14:9062
IPFilter:
IPFilterAction: pass
Prometheus metrics: 8090

HFP started in proxy high performance mode


2023/07/07 07:30:05 HELLO HFP c==>V|| INITIAL Dial LOOPBACK IN success
2023/07/07 07:30:05 -->|| New connection from 127.0.0.1:55612
2023/07/07 07:30:05 ||--> Connected OUT 192.168.10.14:9062
2023/07/07 07:30:05 -->|| Got 9 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:05 ||--> Sending init HELLO HFP successful without filters to 192.168.10.14:9062

2023/07/07 07:30:19 -->|| New connection from 192.168.10.27:57982
2023/07/07 07:30:19 ||--> Connected OUT 192.168.10.14:9062
2023/07/07 07:30:19 -->|| Got 1154 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:19 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:19 -->|| Got 592 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:19 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:19 -->|| Got 238 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:19 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:20 -->|| Got 447 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:20 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:20 -->|| Got 787 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:20 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:20 -->|| Got 267 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:20 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:20 -->|| Got 605 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:20 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:21 -->|| Got 466 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:21 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:21 -->|| Got 1370 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:21 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:21 -->|| Got 898 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:21 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:21 -->|| Got 629 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:21 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:22 -->|| Got 447 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:22 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:22 -->|| Got 730 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:22 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:22 -->|| Got 1098 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:22 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:22 -->|| Got 1274 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:22 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:23 -->|| Got 515 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:23 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:23 -->|| Got 501 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:23 ||--> Sending HEP OUT successful without filters to 192.168.10.14:9062
2023/07/07 07:30:23 -->|| Got 547 bytes on wire -- Total buffer size: 65535
2023/07/07 07:30:23 ||--> Sending HEP OUT error: write tcp4 192.168.10.14:42496->192.168.10.14:9062: write: broken pipe
2023/07/07 07:30:23 ||-->X File Send HEP from buffer to file error read tcp4 192.168.10.14:9060->192.168.10.27:57982: use of closed network connection

ivlovric commented 1 year ago

Hi! This is a known issue, but it should be mitigated in the "next" branch, where a lot of code has been refactored and cleaned up, so feel free to check out that branch and report back. I hope it will be merged to the master branch soon, as it is already being tested in a large HEP environment.

Thanks

bilalrao12 commented 1 year ago

Hi,

I tried to compile the "next" branch with Go version 1.18.2 but am getting the error below.

[root@centos HFP-next]# make
go build -ldflags "-s -w" -o HFP *.go
# command-line-arguments
./HFP.go:66:4: undefined: connectionStatus
./HFP.go:71:4: undefined: connectionStatus
./HFP.go:119:3: undefined: clientLastMetricTimestamp
./HFP.go:144:8: undefined: connectionStatus
./HFP.go:156:10: undefined: connectionStatus
./HFP.go:212:7: undefined: connectionStatus
./HFP.go:223:9: undefined: connectionStatus
./HFP.go:244:6: undefined: connectionStatus
./HFP.go:255:8: undefined: connectionStatus
./HFP.go:338:6: undefined: hepBytesInFile
./HFP.go:338:6: too many errors
make: *** [all] Error 2

Can you please help with this?

ivlovric commented 1 year ago

Hi, thanks for trying this. I just built this branch successfully from a clean clone with the same command.

Can you please check your working directory before issuing the "make" command? You need to be inside the cloned/pulled directory, which has the content below. It looks like you are somehow missing the prometheus.go file:

drwxr-xr-x@  3 ilovric staff  96B Aug 2 15:04 ..
drwxr-xr-x  12 ilovric staff 384B Aug 2 15:04 .git
-rw-r--r--   1 ilovric staff 320B Aug 2 15:04 Dockerfile
-rwxr-xr-x   1 ilovric staff 9.0M Aug 2 15:05 HFP
-rw-r--r--   1 ilovric staff  14K Aug 2 15:04 HFP.go
-rw-r--r--   1 ilovric staff 109B Aug 2 15:04 Makefile
-rw-r--r--   1 ilovric staff 2.8K Aug 2 15:04 README.md
-rw-r--r--   1 ilovric staff 150B Aug 2 15:04 go.mod
-rw-r--r--   1 ilovric staff  13K Aug 2 15:04 go.sum
-rw-r--r--   1 ilovric staff 6.0K Aug 2 15:04 hep.go
-rw-r--r--   1 ilovric staff 1.6K Aug 2 15:04 prometheus.go

Thanks

bilalrao12 commented 1 year ago

Thanks for pointing that out. I have now compiled it successfully, and it seems to have resolved the reconnection issue. I will run more tests.

kYroL01 commented 1 year ago

Hi @ivlovric, we successfully tested the reconnection issue we found, and the next branch works fine for us. You can merge to master when you can :)

ivlovric commented 1 year ago

Thank you all for your feedback. Merging this to master soon :)