OpenNetLab / AlphaRTC

Evaluation framework for RL-based bitrate control for WebRTC
BSD 3-Clause "New" or "Revised" License

Unexpected behavior executing receiver_pyinfer.json and sender_pyinfer.json via network namespace #105

Closed · HoustonHuff closed this issue 1 year ago

HoustonHuff commented 1 year ago

I'm currently attempting to run the peerconnection_serverless demo using cmdinfer, hosted between a network namespace bridge and a node. I've configured the destination IPs in receiver_pyinfer.json and sender_pyinfer.json as I did for their onnx counterparts, with success, but with cmdinfer a different sequence of events now takes place: the connection is terminated and outvideo.yuv is generated as an empty 0-byte file. It goes as follows:

Below is a copy of the receiver-side log file. Since the sender does not appear to log its console output, I will do my best to relay back whatever it reports, as requested or as it seems relevant, to help trace the problem. webrtc.log
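For reference, this is roughly how I point the two configs at the namespace endpoints. It's only a sketch assuming the serverless_connection layout of the example JSON files, and the addresses here are placeholders, so adjust the key names and IPs to whatever your receiver_pyinfer.json and sender_pyinfer.json actually contain.

```python
import json

# Hypothetical addresses: substitute the bridge/veth IPs of your own namespaces.
RECEIVER_IP = "10.0.0.2"
RECEIVER_PORT = 8000

def patch(path, edit):
    with open(path) as f:
        cfg = json.load(f)
    edit(cfg)
    with open(path, "w") as f:
        json.dump(cfg, f, indent=4)

# Sender side: point dest_ip/dest_port at the receiver's namespace address.
patch("sender_pyinfer.json", lambda c: c["serverless_connection"]["sender"].update(
    {"dest_ip": RECEIVER_IP, "dest_port": RECEIVER_PORT}))

# Receiver side: listen on the address that is reachable from the sender's namespace.
patch("receiver_pyinfer.json", lambda c: c["serverless_connection"]["receiver"].update(
    {"listening_ip": RECEIVER_IP, "listening_port": RECEIVER_PORT}))
```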

jeongyooneo commented 1 year ago

Hi @HoustonHuff, thanks for the report!

1. Troubleshooting the problem of outvideo.yuv being 0 bytes:

To investigate the call issue further, let's try the following:

2. Other issues:

"I'm currently attempting to run the peerconnection_serverless demo using cmdinfer, hosted between a network namespace bridge and a node. I've configured the destination IPs in receiver_pyinfer.json and sender_pyinfer.json as I did for their onnx counterparts, with success."

Making a working connection:

"Receiver appears to be processing stream frames; however, sender terminates and destroys the connection session."

The call length is pre-configured via the autoclose option in the receiver- and sender-side JSON files; it defaults to 20 seconds. You can adjust this value to set your call length.
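As a rough sketch (assuming autoclose sits under serverless_connection as in the example configs; please double-check against your JSON files), you could bump the value on both sides like this:

```python
import json

# Sketch only: extend the call from the default 20 s to 60 s in both configs,
# assuming "autoclose" sits under "serverless_connection" as in the example JSONs.
for path in ("receiver_pyinfer.json", "sender_pyinfer.json"):
    with open(path) as f:
        cfg = json.load(f)
    cfg["serverless_connection"]["autoclose"] = 60  # call length in seconds
    with open(path, "w") as f:
        json.dump(cfg, f, indent=4)
```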

HoustonHuff commented 1 year ago

Okay, so I've performed the following steps:

So it looks like the reported total_requested_padding_bitrate is indeed still at 0 Mbps. I'm still exploring the sender logs, but I'll share the new ones here so you can look at them too.

webrtc_receiver.log webrtc_sender.log

jeongyooneo commented 1 year ago

@HoustonHuff From the sender-side log, it seems the bandwidth is kept constant at 300Kbps throughout the call. Let's do a pair debugging session to fix this together. I'll send you an invitation soon.

jeongyooneo commented 1 year ago

The main issue was the lack of an official guide on running the docker-free, pyinfer-based bandwidth estimator. This is addressed in #102, which will be merged after the GCC-related parts are removed.

Mrxiangli commented 4 months ago

Has this issue been fixed? I'm currently encountering the same issue.

HoustonHuff commented 4 months ago

Hi @Mrxiangli, after an OS upgrade I've recently begun re-creating my old build from last year, and I seem to be with you in facing these issues again when running on bare metal. My focus, though, has been more on building peerconnection_serverless myself rather than using the example one, so that I have a starting point for modifying it. Right now 'peerconnection' is the only valid build target I get from 'gn gen out/Default' rather than 'peerconnection_serverless', and even that has build errors I'm still working to parse.

Mrxiangli commented 4 months ago

Thanks for the response @HoustonHuff. Could you shed some light on the issue you fixed last time? When I dive into the logs and play with the script, it seems that the bandwidth_estimator doesn't get called correctly, and the default bandwidth (1 Mbps) in the script never takes effect. For example, if I set the report interval to 3 s instead of the default 200 ms, more frames are transmitted before the "no keyframe detected .. " message, which makes me feel that the Python script is not being called properly.
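To check whether the Python side is reached at all, I've been swapping in a trivial estimator that ignores the incoming stats and always returns a fixed rate; if the sender still sits at the hard-coded default, the script presumably isn't being invoked. This is only a sketch assuming the report_states/get_estimated_bandwidth interface of the example BandwidthEstimator.py, so please verify the class and method names against your copy:

```python
import json
import sys

class Estimator(object):
    """Trivial stand-in estimator: ignores the stats and always returns 2 Mbps."""

    def __init__(self):
        self.call_count = 0

    def report_states(self, stats):
        # stats is the per-packet statistics dict handed over by cmdinfer;
        # log each arrival to stderr so we can confirm the hook is firing.
        self.call_count += 1
        print("report_states #%d: %s" % (self.call_count, json.dumps(stats)),
              file=sys.stderr)

    def get_estimated_bandwidth(self):
        # A value clearly different from the 300 Kbps seen in the sender log
        # and the 1 Mbps script default, so the logs show whether it is used.
        return int(2e6)  # bps
```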

HoustonHuff commented 4 months ago

My troubleshooting last year had the assistance of one of the authors, and I can't quite remember everything that was discussed, but going back through old emails I did find this note that I took for a colleague at the time:

"...the issue was that running on raw build doesn't take advantage of Docker ensuring files are moved to the right place, also that the peerconnection_serverless that needed to be used was actually under examples/peerconnection/serverless and the one in out/Default had to be converted to a .origin file as a bridge between C++ and Python, and examples/peerconnection/serverless/peerconnection_serverless, out/Default/peerconnection_serverless.origin, and modules/third_party/cmdinfer/cmdinfer.py needed to be run in the same directory with peerconnection_serverless pointing with an absolute path to peerconnection_serverless.origin."

The issues that I had noted were these:

Right now I can't speak to the reporting and frame intervals, as I'm still getting cmdinfer set back up just to get the import working, but I'll try to share more to help if I find anything.

HoustonHuff commented 4 months ago

Have you had success building the out/Default instance of peerconnection_serverless from scratch? I find it very odd that 'peerconnection' is my only build target from 'gn gen' rather than 'peerconnection_serverless'. I eventually got past the difficulties I had with ninja and had it built before, but I can't find the fix for that now.

Mrxiangli commented 4 months ago

Yes, I did successfully build it from scratch by following the README, and the onnx-model-based controller works as well; it's just pyinfer that has the issue. I tried it on both Ubuntu 18 and Ubuntu 20, and the build itself does not give any problems. On Ubuntu 18 there is the lru_cache problem, but that can be solved by updating Python to 3.8+.
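For anyone else who hits it: the lru_cache problem on older interpreters is typically the bare-decorator form, which only Python 3.8+ accepts. A small illustration (not taken from the AlphaRTC sources):

```python
import functools

# On Python < 3.8 this bare-decorator form raises
# "TypeError: Expected maxsize to be an integer or None" at import time;
# Python 3.8+ accepts it, which is why upgrading the interpreter fixes it.
@functools.lru_cache
def square(x):
    return x * x

# Portable spelling that also works on 3.7 and earlier:
@functools.lru_cache(maxsize=None)
def cube(x):
    return x * x * x

if __name__ == "__main__":
    print(square(3), cube(3))
```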

ammartahir24 commented 1 month ago

I had the same issue. While I was not able to make cmdInfer run using the Python file, you can go to cmdinfer.cc and implement there whatever you needed to implement in the Python file. If your use case does not specifically require functionality from Python, this should help (and I imagine it might be more performant as well).