approach for mcu with audio mixer


I want to implement an audio conference application (MCU) where each participant
sends a WebRTC audio stream (browser peerConnection, sendonly) to GStreamer
(one webrtcbin receiver per participant). All incoming audio streams are mixed
(audiomixer), and the combined stream is sent back (one webrtcbin sender per
participant) to each participant (browser peerConnection, recvonly).
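To make the topology concrete, here is a rough sketch of what I mean
(webrtcbin needs application-driven signaling, so this is illustrative
pseudocode, not a runnable pipeline; the tee after the mixer is my assumption
about how one mix would reach N senders):

```
participant 1 ──► webrtcbin (recv) ─┐                      ┌─► webrtcbin (send) ──► participant 1
participant 2 ──► webrtcbin (recv) ─┼─► audiomixer ─► tee ─┼─► webrtcbin (send) ──► participant 2
participant 3 ──► webrtcbin (recv) ─┘                      └─► webrtcbin (send) ──► participant 3
```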

The problem I see is that with this approach each participant receives the
full mix, which also contains the audio that the participant sent, so
everyone hears an echo of their own voice.

Basically, if there are 3 participants 1, 2, 3 in the MCU:

1 - hears audio from 2 and 3, plus 1's own
2 - hears audio from 1 and 3, plus 2's own
3 - hears audio from 1 and 2, plus 3's own

I want to avoid this: each participant should hear the audio of all other
participants, but not their own.

Is there an approach to implement this kind of use case?

I found a similar example at

but I am not sure whether that is the only way; a better approach is welcome.

I was thinking that, for each sending webrtcbin after the audio mixer, I could
somehow filter out the audio that came from that same participant (what audio
engineers call a "mix-minus" or N-1 mix), but I am not sure whether that is a
valid/practical approach.
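One way I imagine realizing this is per-listener mixing: instead of one
audiomixer, give each participant their own audiomixer that receives every
stream except that participant's, e.g. by tee-ing each decoded input. The
routing itself is simple; below is a minimal sketch of it in plain Python
(the participant IDs and the function name are hypothetical, and actually
linking tee/audiomixer/webrtcbin pads is left out):

```python
# Sketch of "mix-minus" (N-1) routing for an audio MCU.
# In GStreamer terms: each incoming decoded stream goes through a tee,
# each listener gets their own audiomixer feeding their sending webrtcbin,
# and each tee is linked to every mixer except the one belonging to the
# tee's own participant.

def mix_minus_routing(participants):
    """For each listener, list the sources that should be mixed for them."""
    return {
        listener: sorted(src for src in participants if src != listener)
        for listener in participants
    }

routing = mix_minus_routing(["1", "2", "3"])
for listener, sources in sorted(routing.items()):
    print(f"participant {listener} hears: {', '.join(sources)}")
# → participant 1 hears: 2, 3
#   participant 2 hears: 1, 3
#   participant 3 hears: 1, 2
```

The cost is one audiomixer per participant instead of a single shared one,
which for audio-only streams seems acceptable.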

Please suggest a way of doing this, or any pointers to start with...


Sent from:
gstreamer-devel mailing list