What is the best practice for keeping two live sources in sync?
On the sender side, a camera produces H.264 video and raw PCM audio. The audio is Opus-encoded and both streams are sent over UDP (using an rtpbin with an rtpsession for each source).
On the receiver side there's an rtpbin with an rtpsession/rtpjitterbuffer for each source. The audio is decoded to raw PCM and the video to JPEG. At the end, there's an appsink for each source.
How do I handle the timestamps on the receiver? The sender will make RTP timestamps using the clock in the pipeline, I think?
Can anyone explain?
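As far as I understand it, yes: the sender stamps each packet from the shared pipeline clock, converted into the payload's RTP clock rate. A minimal model of that conversion in plain Python (the function and variable names are mine for illustration, not GStreamer API):

```python
# Rough model of RTP timestamping per RFC 3550: each stream advances its
# 32-bit timestamp at its own clock rate, starting from a random offset.
VIDEO_CLOCK_RATE = 90_000   # H.264 over RTP (RFC 6184)
AUDIO_CLOCK_RATE = 48_000   # Opus over RTP (RFC 7587)

def rtp_timestamp(running_time_ns: int, clock_rate: int, random_offset: int) -> int:
    """Map a pipeline running time (ns) to a 32-bit RTP timestamp."""
    ticks = running_time_ns * clock_rate // 1_000_000_000
    return (random_offset + ticks) & 0xFFFFFFFF  # RTP timestamps wrap at 2^32

# Both streams share one pipeline clock, so the same running time maps to
# proportional tick counts in each stream's own rate:
t = 2_000_000_000  # 2 s of running time
print(rtp_timestamp(t, VIDEO_CLOCK_RATE, 0))  # 180000
print(rtp_timestamp(t, AUDIO_CLOCK_RATE, 0))  # 96000
```

Because each stream starts from its own random offset, the raw RTP timestamps of the two streams are not directly comparable; that's what RTCP sender reports are for.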
You should be able to share the clock between the two pipelines that are doing the transcoding; the receivers should then play the two streams at the same time. But there are a lot of variables involved; more than likely you would play both streams as the transcoder "saw" them. If one source is inherently higher latency than the other, they might not stay aligned.
Alternatively, you could combine both pipelines into a single pipeline, which should handle the syncing for the transcoder, assuming you don't want to control both streams independently.
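A sketch of the single-pipeline variant. This is a hedged example: the sources (audiotestsrc, v4l2src), host, and ports are placeholders you'd replace with your actual setup:

```shell
# One pipeline means one clock: both branches are timestamped against the
# same pipeline clock, so rtpbin can emit consistent RTP/RTCP for both.
gst-launch-1.0 rtpbin name=rtpbin \
  v4l2src ! videoconvert ! x264enc tune=zerolatency ! rtph264pay \
    ! rtpbin.send_rtp_sink_0 \
  audiotestsrc ! audioconvert ! opusenc ! rtpopuspay \
    ! rtpbin.send_rtp_sink_1 \
  rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
  rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002
```

For the receiver to actually synchronize the streams, you'd also wire up rtpbin's send_rtcp_src_0 / send_rtcp_src_1 pads to udpsinks (and the matching recv_rtcp_sink pads on the receiver), since the RTCP sender reports carry the mapping that relates the two streams' timestamps.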
On the sender side both audio and video are inside the same pipeline, so they
should be in sync (ignoring latency at this point).
The receiver has a RtpBin which dynamically constructs a RtpSession, RtpJitterBuffer and RtpRtxReceive for each RTP stream. At this point, I should be able to correlate the two RTP streams by their timestamps, I guess?