GStreamer clocks and latency calculation


Bob Barker

We’re new to GStreamer and are writing our own elements.  We’re trying to understand which clocks are used for jitter calculation. From the GStreamer documentation:

http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-qos.txt

“A buffer with timestamp B1 arrives in the sink at time T1. The buffer
timestamp is then synchronized against the clock which yields a jitter J1
return value from the clock. The jitter J1 is simply calculated as

  J1 = CT - B1

Where CT is the clock time when the entry arrives in the sink. This value
is calculated inside the clock when we perform gst_clock_entry_wait().

If the jitter is negative, the entry arrived in time and can be rendered
after waiting for the clock to reach time B1 (which is also CT - J1).

If the jitter is positive however, the entry arrived too late in the sink
and should therefore be dropped. J1 is the amount of time the entry was late.”

We assume the clock is the monotonically increasing wall clock, not the running time nor the stream time shown in section 14.4 of the GStreamer Application Development Manual. According to the documentation, the running time always starts at zero, so the running time and the stream time are always less than the clock time.

 

In the GStreamer clocks diagram (and in the real world) the clock time is always greater than the stream time and the running time. Given that and the equation above (J1 = CT - B1), the jitter will always be positive. According to the GStreamer documentation, “If the jitter is positive however, the entry arrived too late in the sink and should therefore be dropped.”

 

Questions:

1. So all buffers will always be late and should be dropped?! What are we missing?

 

2. Which timestamp is the documentation referring to? GStreamer calls out one in the Linux version of gstbuffer.h (“GstClockTime timestamp;”) and two in the Windows version: pts and dts.

 

 * @pts: presentation timestamp of the buffer, can be #GST_CLOCK_TIME_NONE when the
 *     pts is not known or relevant. The pts contains the timestamp when the
 *     media should be presented to the user.
 * @dts: decoding timestamp of the buffer, can be #GST_CLOCK_TIME_NONE when the
 *     dts is not known or relevant. The dts contains the timestamp when the
 *     media should be processed.

       GstClockTime           pts;
       GstClockTime           dts;

 

3. Which of the three clocks (clock time, running time, stream time) are the pts and dts referring to?

4. It seems the correct jitter formula is:

       Jitter = buffer_arrival_time - base_time - buffer_time_stamp

   where buffer_arrival_time is read from the system clock.

 

Thanks.



Re: GStreamer clocks and latency calculation

Tim-Philipp Müller
On Fri, 2013-10-04 at 12:41 -0700, Bob Barker wrote:

Hi,

> We’re new to GStreamer and are writing our own elements.  We’re trying
> to understand which clocks are used for jitter calculation. From the
> GStreamer documentation:
>
> http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-qos.txt
>
> “A buffer with timestamp B1 arrives in the sink at time T1. The buffer
> timestamp is then synchronized against the clock which yields a jitter
> J1 return value from the clock. (snip)"
>  
>
> We assume the clock is the monotonically increasing wall clock, not
> the running time nor the stream time shown in section 14.4 of the
> GStreamer application development manual. From the documentation the
> running time always starts at zero so the running time and stream time
> are always less than the clock time.
>
> In the GStreamer clocks diagram (and the real-world) the clock time is
> always greater than stream time and the running time. Given that and
> the equation above (  J1 = CT – B1) , jitter will always be positive.
> According to GStreamer documentation “If the jitter is positive
> however, the entry arrived too late in the sink and should therefore
> be dropped. “
>
> Questions:
>
> 1.      So all buffers will always be late and should be dropped?!
>  What are we missing?

It sounds like maybe you're taking these docs a little too literally. The
clock time is not necessarily the actual value gst_clock_get_time()
returns, and the buffer time stamp will likely be adjusted for things
like the configured latency, and converted into running time for
synchronization purposes.

Things also depend a bit on the exact pipeline, and what elements are
involved. If you have filesrc ! parse ! dec ! audiosink and the pipeline
selects the audio clock from the sink, then that clock will be based on
audio samples rendered (and some of your assumptions about the real
world are not necessarily true any more).


> 2.      Which timestamp is the documentation referring to?
> GStreamer calls out one in the Linux version of  gstbuffer.h:“
> GstClockTime           timestamp;” and  two in the windows version:
> pts and dts.

It depends on the circumstances. Usually you will sync on pts, but there
may be cases where one would sync on dts if it is available, e.g. for
encoded content.

There is no difference between "the Linux version" and "the Windows
version". What you're seeing is the old 0.10 version vs. the new 1.0
version, which have different APIs.


> 3.      Which of the three clocks (clocktime, running time, stream
> time) are the pts and dts referring to?

Those are not "three clocks", there is only one clock. pts/dts are
"stream time", their actual values need to be used in connection with
the SEGMENT to transform those into running time for synchronization
purposes.
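
Very roughly, and leaving out most of what GstBaseSink actually does,
the conceptual steps look like this (the helper name and its arguments
are made up for illustration, and we assume the pipeline is PLAYING
with a clock selected):

#include <gst/gst.h>

/* Sketch only -- not the actual GstBaseSink code. The SEGMENT turns the
 * buffer's stream-time PTS into running time, and the clock time is
 * adjusted by the element's base time before the two are compared. */
static GstClockTimeDiff
sketch_compute_jitter (GstElement * sink, const GstSegment * segment,
    GstBuffer * buf)
{
  /* stream time (the buffer PTS) -> running time, via the SEGMENT */
  GstClockTime running_time = gst_segment_to_running_time (segment,
      GST_FORMAT_TIME, GST_BUFFER_PTS (buf));

  /* "clock time" in the QoS document is effectively running time too:
   * absolute clock time minus the pipeline's base time */
  GstClock *clock = gst_element_get_clock (sink);
  GstClockTime now = gst_clock_get_time (clock);
  GstClockTime base_time = gst_element_get_base_time (sink);

  gst_object_unref (clock);

  /* positive jitter -> the buffer is late; negative -> it is early */
  return GST_CLOCK_DIFF (running_time, now - base_time);
}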


> 4.      It seems the correct jitter formula is:
>  Jitter = buffer_arrival_time - base_time - buffer_time_stamp
> Where buffer_arrival_time is read from the system clock.

I believe the clock time as mentioned in the document is supposed to be
already adjusted like that (i.e. time pipeline spent in playing state
according to the selected clock) and not an actual clock time value.

Cheers
 -Tim


Re: GStreamer clocks and latency calculation

Bob Barker
Thanks Tim. You're right: we are taking the documentation literally because we're new to GStreamer and need to write our own elements. Without a tutorial, there's nothing else to go on.

A few more questions:

Is the decoding timestamp the time the buffer actually was decoded (after decoding has finished), or the time it should be decoded (as in "this needs to be decoded at <some point in the future>")?

Same question as above for the presentation timestamp.

On latency the documentation says:

"14.6.1. Latency compensation

Before the pipeline goes to the PLAYING state, it will, in addition to
selecting a clock and calculating a base-time, calculate the latency in
the pipeline. It does this by doing a LATENCY query on all the sinks in
the pipeline. The pipeline then selects the maximum latency in the
pipeline and configures this with a LATENCY event.

All sink elements will delay playback by the value in the LATENCY event.
Since all sinks delay with the same amount of time, they will be
relative in sync."

 

When does each element calculate its latency, and when does the pipeline issue the LATENCY query? Doing it in the PLAYING state seems too late. Does it happen during preroll? Some buffers have to be processed by every element to determine where the longest latency is.




Re: GStreamer clocks and latency calculation

David Röthlisberger
On 8 Oct 2013, at 16:23, Bob Barker wrote:
>
> You're right: we are taking the documentation literally because we're new to GStreamer and need to write our own elements. Without a tutorial, there's nothing else to go on.

I must say I'm looking forward to Edward Hervey's talk at the upcoming
GStreamer conference:
http://gstreamer.freedesktop.org/conference/2013/speakers.html#hervey-time


Re: GStreamer clocks and latency calculation

Edward Hervey
Hi,

On Tue, 2013-10-08 at 08:23 -0700, Bob Barker wrote:

> Thanks Tim. You're right: we are taking the documentation literally
> because we're new to GStreamer and need to write our own elements.
> Without a tutorial, there's nothing else to go on.
>  
>  
> A few more questions:
>  
> Is the decoding timestamp the time the buffer actually was decoded
> (after decoding has finished), or the time it should be decoded (as in
> "this needs to be decoded at <some point in the future>")?

  Neither in GStreamer.

  Unless you explicitly synchronize against the clock in your decoders
(hint: don't), the buffer PTS/DTS are just used as relative timestamps.

  => Relative to previous/future buffers of that stream (will this
buffer be decoded before/after this other one, should this buffer be
presented before/after this other one).
  => Relative to buffers of other streams (should this buffer be
presented before/after this buffer from another stream).

  We don't really "use" DTS in GStreamer except for:
  * carrying it around and storing it in formats that require it
  * using it to extrapolate missing PTS in parsers if needed

  All the various synchronization algorithms will be using the PTS.
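
  For illustration only (a sketch, not taken from any particular
element; the function and its arguments are made up), an element just
stamps relative timestamps and never waits on the clock itself:

#include <gst/gst.h>

/* Sketch: stamp buffer N of a fixed-framerate stream. Timestamps are
 * relative to the start of the stream; no clock is involved. */
static void
sketch_stamp_buffer (GstBuffer * buf, guint64 frame_count,
    gint fps_n, gint fps_d)
{
  GST_BUFFER_PTS (buf) =
      gst_util_uint64_scale (frame_count, GST_SECOND * fps_d, fps_n);
  GST_BUFFER_DURATION (buf) =
      gst_util_uint64_scale (GST_SECOND, fps_d, fps_n);
}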

>  
>  
> Same question as above  about presentation timestamp.
>  
>  
>  
> On latency the documentation says:
>
> "14.6.1. Latency compensation
>
> Before the pipeline goes to the PLAYING state, it will, in addition to
> selecting a clock and calculating a base-time, calculate the latency in
> the pipeline. It does this by doing a LATENCY query on all the sinks in
> the pipeline. The pipeline then selects the maximum latency in the
> pipeline and configures this with a LATENCY event.
>
> All sink elements will delay playback by the value in the LATENCY
> event. Since all sinks delay with the same amount of time, they will be
> relative in sync."
>
> When does each element calculate its latency, and when does the
> pipeline issue the LATENCY query? Doing it in the PLAYING state seems
> too late. Does it happen during preroll? Some buffers have to be
> processed by every element to determine where the longest latency is.

  First of all, the latency is only *used* when you have at least a live
source AND a live sink.

  Furthermore, note that in GStreamer the latency is different from the
processing delay. Latency is independent of the processing power you
have available, and corresponds to the minimum/maximum latency
introduced by an element: the element needs input with PTS X+lat in
order to push out the buffer with PTS X (e.g. a decoder with frame
reordering will have/report the same latency whether you have a 1GHz
CPU, a 4GHz CPU or even a HW-accelerated chip). Source elements, which
don't have input buffers, report latency the same way (regardless of
your processing speed, a webcam will always introduce (crap hardware
put aside) a latency equal to the duration of a frame, etc.). Finally,
some elements might also purposefully introduce some latency for proper
behaviour (the rtp jitterbuffer allows buffers to arrive up to X ns late
in order to do reordering).
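
  To make the webcam example concrete, such a live source could answer
the LATENCY query with one frame's duration as both min and max. Just a
sketch: MySrc, its framerate fields and the usual GObject boilerplate
are assumed here, this is not code from an actual plugin.

#include <gst/base/gstbasesrc.h>

/* Sketch: a live source reports one frame of latency. The GObject
 * boilerplate (MySrc, parent_class) is elided. */
static gboolean
my_src_query (GstBaseSrc * bsrc, GstQuery * query)
{
  MySrc *self = MY_SRC (bsrc);

  if (GST_QUERY_TYPE (query) == GST_QUERY_LATENCY) {
    /* min = max = duration of one frame at the configured framerate */
    GstClockTime frame_dur =
        gst_util_uint64_scale (GST_SECOND, self->fps_d, self->fps_n);

    gst_query_set_latency (query, TRUE /* live */, frame_dur, frame_dur);
    return TRUE;
  }

  return GST_BASE_SRC_CLASS (parent_class)->query (bsrc, query);
}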

  You are right in stating that some elements might require some buffers
(or rather headers and/or configuration information) to know how much
latency they will introduce (taking the decoder example above, you need
to read the header/codec info to know how much frame reordering will be
introduced; for the webcam we need to know the configured
framerate, ...). But that calculation will happen either before or just
when pushing out the first buffer.

  To finish off, elements can also notify changes in latency during
playback by posting a GST_MESSAGE_LATENCY, which applications should
then use to trigger the recalculation/redistribution of the pipeline
latency.
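
  In code that is roughly the following (a sketch: "self" is your
element, "pipeline" your GstPipeline, and the case fragment lives in
the application's bus callback):

  /* element side: our latency changed, ask for redistribution */
  gst_element_post_message (GST_ELEMENT (self),
      gst_message_new_latency (GST_OBJECT (self)));

  /* application side, inside the bus callback's switch: */
  case GST_MESSAGE_LATENCY:
    gst_bin_recalculate_latency (GST_BIN (pipeline));
    break;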

  Look for GST_QUERY_LATENCY in existing plugins to see how they respond
to it.
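
  The usual pattern for an element in the middle of the pipeline is to
query upstream first and then add its own contribution. A sketch only:
MyFilter, its sinkpad field and OUR_LATENCY are placeholders.

static gboolean
my_filter_src_query (GstPad * pad, GstObject * parent, GstQuery * query)
{
  MyFilter *self = MY_FILTER (parent);

  if (GST_QUERY_TYPE (query) == GST_QUERY_LATENCY) {
    gboolean live;
    GstClockTime min, max;

    /* ask upstream first */
    if (!gst_pad_peer_query (self->sinkpad, query))
      return FALSE;

    /* then add whatever latency this element introduces */
    gst_query_parse_latency (query, &live, &min, &max);
    min += OUR_LATENCY;
    if (max != GST_CLOCK_TIME_NONE)
      max += OUR_LATENCY;
    gst_query_set_latency (query, live, min, max);

    return TRUE;
  }

  return gst_pad_query_default (pad, parent, query);
}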

  More details at the GStreamer conference...

     Edward


Re: GStreamer clocks and latency calculation

Tim-Philipp Müller
On Tue, 2013-10-08 at 08:23 -0700, Bob Barker wrote:

Hi,

> You're right: we are taking the documentation literally because we're
> new to GStreamer and need to write our own elements. Without a
> tutorial, there's nothing else to go on.

It wasn't meant as criticism, more of a heads-up that the design docs
often describe concepts/designs/background and not every single detail
involved. It's often good to read them in combination with the code in
question, here the GstBaseSink code.
 
 Cheers
  -Tim

