Receiver pipeline fails to go to Playing state in streaming audio scenario



We have built an application in which a sender streams audio to multiple receivers over a network, using this approach:


Sender:
Set up a netclock time provider (server)
Use the system clock for both the provider and the pipeline
"ntp-time-source": clock-time


Receiver:
Set up a netclient clock pointed at the sender's server
Use that clock for the pipeline and set a 500 ms fixed latency
"ntp-time-source": clock-time, "ntp-sync": TRUE, "buffer-mode": synced

based on this presentation:

We are experiencing an issue during setup where the receiver pipeline fails to go to the Playing state because the rtpjitterbuffer either throws away all data or keeps buffering it. This prevents the rtpbin from creating a src pad, so we cannot link our pipeline. This happens roughly 20% of the time.

With buffer-mode: synced, the rtpjitterbuffer should use the first received buffer's timestamp as base time. Could it be that this mechanism fails for some reason, causing the jitterbuffer to believe that all packets are either late or should be played at some time in the future?
Could there be another explanation for this issue?

It is worth mentioning that our receiving pipeline uses a short jitterbuffer and a short total pipeline latency in order to achieve low-latency streaming.

If we wanted to manage the base time ourselves by getting the time provider's current timestamp and distributing/applying this timestamp to the receivers using these calls:
gst_pipeline_use_clock (GST_PIPELINE (pipeline), clock);
gst_element_set_start_time (pipeline, GST_CLOCK_TIME_NONE);
gst_element_set_base_time (pipeline, base_time);
should we also apply this timestamp on the sender pipeline?

Which buffer-mode should we use if we want to manage base time ourselves?

Do we need to alter any of the other parameters mentioned above if we want to manage the base time ourselves?