Currently, we don't differentiate between audio and video tracks when doing RTP packet forwarding. This does no harm, but it performs computations for audio tracks that are unnecessary and consume more CPU than needed.
Instead of forwarding RTP packets for audio, we could create a local track for each audio track and add that same track to every peer who subscribes to the audio feed of others (as we did prior to the simulcast implementation).
That would spare us some CPU usage and simplify the code in certain places a bit.
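A minimal sketch of the idea, using hypothetical types (`LocalTrack`, `Subscriber` are illustrative names, not from the actual codebase): one shared local track per incoming audio track fans the same packet out to every subscriber, with no per-subscriber simulcast layer selection or packet rewriting, since audio has no simulcast layers.

```go
package main

import "fmt"

// Subscriber stands in for one subscribing peer connection.
// (Hypothetical type for illustration.)
type Subscriber struct {
	id       string
	received []string
}

// Write delivers one packet to this subscriber.
func (s *Subscriber) Write(pkt string) {
	s.received = append(s.received, pkt)
}

// LocalTrack wraps one incoming audio track and is shared by all
// subscribing peers, so each packet is processed exactly once.
// (Hypothetical type for illustration.)
type LocalTrack struct {
	subscribers []*Subscriber
}

// AddSubscriber attaches the same track to another peer.
func (t *LocalTrack) AddSubscriber(s *Subscriber) {
	t.subscribers = append(t.subscribers, s)
}

// Write fans the packet out to every subscriber unchanged —
// no per-subscriber layer selection, unlike the video path.
func (t *LocalTrack) Write(pkt string) {
	for _, s := range t.subscribers {
		s.Write(pkt)
	}
}

func main() {
	track := &LocalTrack{}
	alice := &Subscriber{id: "alice"}
	bob := &Subscriber{id: "bob"}
	track.AddSubscriber(alice)
	track.AddSubscriber(bob)

	// One write per incoming packet, regardless of subscriber count.
	track.Write("pkt-1")
	track.Write("pkt-2")

	fmt.Printf("alice got %d packets, bob got %d packets\n",
		len(alice.received), len(bob.received))
}
```

In a real SFU built on a library like Pion, the library's own local-track type would play the role of `LocalTrack` and handle packetization; the point here is only that the audio path needs a single shared write, not per-subscriber forwarding logic.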