I'd like to know if there is an element, or possibly a set of elements,
or a library API, that can help with providing smooth and robust
timestamping for recording GStreamer pipelines.
The two main needs I'm thinking of for recordings are:
1. Provide an initial timestamp of 0:00:00 in the output file,
regardless of the running time of the pipeline when the recording starts.
2. Provide smooth, increasing timestamps in the output, regardless of
missing frames or gaps in the input.
An example of when (1) is needed is when recording starts some time
after the pipeline started. An example of (2) is wanting to store
media from an incoming RTP stream which suffers some packet loss
(typically without re-encoding; mind that no decoding should be needed
just to capture and store a remote stream).
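To make those two adjustments concrete, here is a rough sketch in plain Python (not GStreamer API; the nanosecond values, the gap threshold, and the nominal frame duration are all assumptions for illustration):

```python
# Sketch of the two timestamp adjustments, on nanosecond integers.
# Not GStreamer code; gap_thresh_ns and frame_dur_ns are assumed values.

def rebase(pts_list, frame_dur_ns=33_333_333, gap_thresh_ns=500_000_000):
    """(1) shift timestamps so the first output buffer lands at 0:00:00,
    (2) collapse large input gaps so the output keeps increasing smoothly."""
    out = []
    offset = None  # timestamp of the first recorded buffer
    shift = 0      # accumulated time removed from detected gaps
    prev = None
    for pts in pts_list:
        if offset is None:
            offset = pts                              # (1) rebase to zero
        if prev is not None and pts - prev > gap_thresh_ns:
            shift += (pts - prev) - frame_dur_ns      # (2) close the gap
        prev = pts
        out.append(pts - offset - shift)
    return out

# A pipeline already running for 10 s when recording starts, with a ~2 s
# hole in the middle, still yields a file starting at 0 with a smooth timeline:
print(rebase([10_000_000_000, 10_033_333_333, 12_000_000_000]))
# -> [0, 33333333, 66666666]
```

A real implementation would of course do this per-buffer in a pad probe rather than over a list, but the arithmetic is the same.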
These two things won't happen just by adding a filesink at the end of
the typical playback pipeline, so the easy and commonly found solution
for creating output files is clearly insufficient. Also, maybe there are
other important aspects to worry about that I'm not even realizing must
be considered (so please feel free to comment).
I understand covering 100% of cases is complex, but maybe GStreamer is
lacking some guidance for the proverbial 80%. I already have a somewhat
working solution, but in the process I realized that users going from a
simple playback pipeline (which can be achieved with just a gst-launch
incantation) to a recording one (and doing it properly) will face a
pretty steep learning curve: they have to familiarize themselves with
all the different clocks and timers, events, low-level buffer handling,
what PTS and DTS are, how they differ, what the appropriate way of
interpreting each one is, etc.
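As a tiny illustration of just one of those concepts (plain numbers, not GStreamer API; the millisecond values are made up): with B-frames, buffers arrive in decode order, where DTS increases monotonically, but must be shown in PTS order.

```python
# Illustrative only: one I/P/B group with a one-frame reordering delay.
# Tuples are (frame, dts_ms, pts_ms) in decode order; values are assumptions.
decode_order = [("I", 0, 33), ("P", 33, 99), ("B", 66, 66)]

# A player reorders by PTS to recover the display order:
display_order = sorted(decode_order, key=lambda f: f[2])
print([name for name, dts, pts in display_order])
# -> ['I', 'B', 'P']
```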
So I thought that instead of reinventing the wheel on my own, and
possibly missing some crucial details along the way, it would be nice to
collect some info about this topic, unless there is already some blog
post or conversation that covers all this in a comprehensive manner (if
so, I'd love to have a look at it!). Otherwise I'd probably end up
writing a post about this, to aid myself and other people in the same
situation.
On 12/10/20 17:50, Juan Navarro wrote:
> I'd like to know if there is an element, or possibly a set of
> elements, or a library API, that can help with providing smooth and
> robust timestamping for recording GStreamer pipelines.
> The two main needs I'm thinking for recordings are:
> 1. Provide an initial timestamp of 0:00:00 in the output file,
> regardless of the running time of the pipeline when the recording starts.
I'm wondering if this should be done separately for audio and video, or
if doing it just for the first buffer would be enough.
For 1 video & 1 audio track, the code I inherited would monitor both
branches of the pipeline and store as "offset" the PTS of the first
buffer that passes through towards the filesink, regardless of which one
it is (video or audio; typically audio). Then it subtracts that offset
from the PTS of all the following buffers (both video and audio).
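A minimal model of that behavior, next to a per-track variant, might look like this (plain Python, not GStreamer API; class and method names, and the nanosecond PTS values, are made up for illustration):

```python
# Hypothetical models, not GStreamer API: a single shared offset (the
# inherited behavior described above) versus one offset per track.

class SharedOffset:
    """One offset, taken from whichever buffer arrives first."""
    def __init__(self):
        self.offset = None

    def push(self, track, pts):
        if self.offset is None:
            self.offset = pts       # first buffer on either track wins
        return track, pts - self.offset

class PerTrackOffset:
    """A separate offset per track: every track starts at 0."""
    def __init__(self):
        self.offsets = {}

    def push(self, track, pts):
        return track, pts - self.offsets.setdefault(track, pts)

# Audio arrives first at 5.000 s; video follows 40 ms later:
shared = SharedOffset()
print(shared.push("audio", 5_000_000_000))   # -> ('audio', 0)
print(shared.push("video", 5_040_000_000))   # -> ('video', 40000000)

per_track = PerTrackOffset()
print(per_track.push("audio", 5_000_000_000))  # -> ('audio', 0)
print(per_track.push("video", 5_040_000_000))  # -> ('video', 0)
```

Note the trade-off this exposes: the shared offset preserves the 40 ms inter-track skew (the muxer sees the video track starting slightly late), while per-track offsets make both tracks start at 0 and silently discard that skew, which shifts A/V sync if the skew was real rather than capture jitter.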
This alone raises several questions already:
* Assuming both PTS and DTS are available, and not wanting to assume
whether B-frames will be present: should the PTS be stored as the
offset? Or the DTS? Or neither, using the pipeline's running time
instead?
* In case of using PTS or DTS as the offset: is a single offset enough
(taken from the very first buffer that passes through)? Or should
individual offsets be used for each track (an audio offset, and a
separate video offset)?