a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)


Nitin
Hello folks,

I'm observing an A/V freeze (GStreamer 1.4.5).
Sequence of operations:
- A/V playback is ongoing (the pipeline is in the PLAYING state)

On audio src pad of demux:
- push gst_event_new_flush_start()
- push gst_event_new_flush_stop(FALSE)
- push new segment event
- push buffers

A/V freeze (occurs randomly).
The idea is to flush the audio chain of the GStreamer pipeline before pushing buffers of a different language of the same codec.
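For reference, the sequence above can be sketched roughly as follows. This is a minimal sketch, not the actual application code: the pad variable, function name, and segment values are assumptions.

```c
#include <gst/gst.h>

/* Sketch of the flush sequence described above, pushed on the demuxer's
 * audio src pad. "audio_srcpad" and "new_start" are hypothetical names. */
static void
switch_audio_language (GstPad *audio_srcpad, GstClockTime new_start)
{
  GstSegment segment;

  /* 1. Flush start: downstream audio elements drop data and unblock. */
  gst_pad_push_event (audio_srcpad, gst_event_new_flush_start ());

  /* 2. Flush stop with reset_time = FALSE: the flushing state is cleared
   *    but the running time is deliberately NOT reset to 0. */
  gst_pad_push_event (audio_srcpad, gst_event_new_flush_stop (FALSE));

  /* 3. New segment for the buffers that follow. */
  gst_segment_init (&segment, GST_FORMAT_TIME);
  segment.start = new_start;
  gst_pad_push_event (audio_srcpad, gst_event_new_segment (&segment));

  /* 4. Buffers of the new language are then pushed with gst_pad_push (). */
}
```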

It seems that the pipeline is blocked in the basesink of the video chain.
The clock the video renderer is waiting on inside gst_system_clock_id_wait_jitter_unlocked() is named GstPulseSinkClock:
(gdb) p *clock
$72 =

{
 object =
 {
     object = {g_type_instance = {g_class = 0xac707c58}, ref_count = 46, qdata = 0x0}, lock = {p = 0xac709f70, i = {2893062000, 0}},
     name = 0xac705520 "GstPulseSinkClock",
     parent = 0x0,
     flags = 752,
     control_bindings = 0x0,
     control_rate = 100000000,
     last_sync = 18446744073709551615,
     _gst_reserved = 0x0
 },
 priv = 0xad167828,
 _gst_reserved = {0x0, 0x0, 0x0, 0x0}
 }
 
So far I have not been able to work out how clock management and sync with the renderer take place.
But the two threads below seem related, as it appears the video renderer is polling for some event from the audio renderer?
Any suggestion would be more than helpful.
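On the clock/renderer relationship: in GStreamer the audio sink usually provides the pipeline clock, which is why the video sink's wait shows up against GstPulseSinkClock. My rough model of the sync in basesink (a simplification of gst_segment_to_running_time(); rate handling and clipping are omitted, and the struct below is mine, not GStreamer's):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of segment-based sync in basesink. All times are in
 * nanoseconds. Not the real GStreamer code, just the arithmetic. */
typedef struct { uint64_t start; uint64_t base; } Segment;

/* Running time keeps increasing across segments because each new segment
 * carries a base offset accumulated so far. */
static uint64_t
running_time (const Segment *seg, uint64_t pts)
{
  return (pts - seg->start) + seg->base;
}

/* basesink waits on the shared pipeline clock until the absolute time
 * base_time + running_time(pts); both sinks use the same clock. */
static uint64_t
sync_wait_until (const Segment *seg, uint64_t pts, uint64_t base_time)
{
  return base_time + running_time (seg, pts);
}
```

Since both sinks compute their wait against the same clock, anything that makes that clock stop advancing would leave the video sink blocked exactly as in the trace above.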

Thread (video sink, blocked in the clock wait):
#0  0xb7fdd428 in __kernel_vsyscall ()
#1  0xb7ab5108 in __GI_ppoll (fds=0x9fe12c48, nfds=1, timeout=0xa07fe1d8, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:56
#2  0xb7d76f29 in ppoll (__ss=0x0, __timeout=<optimized out>, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/i386-linux-gnu/bits/poll2.h:77
#3  gst_poll_wait (set=0xac703c58, timeout=23516000) at gstpoll.c:1248
#4  0xb7d8d76d in gst_system_clock_id_wait_jitter_unlocked (clock=0xad167890, entry=0xac731948, jitter=0xa07fe650, restart=1) at gstsystemclock.c:684
#5  0xb7d3c1aa in gst_clock_id_wait (id=0xac731948, jitter=jitter@entry=0xa07fe650) at gstclock.c:539
#6  0xb77f1930 in gst_base_sink_wait_clock (sink=sink@entry=0xadc0cfd8, time=<optimized out>, jitter=jitter@entry=0xa07fe650) at gstbasesink.c:2104
#7  0xb77f3317 in gst_base_sink_do_sync (basesink=basesink@entry=0xadc0cfd8, obj=obj@entry=0x9fe18360, late=late@entry=0xa07fe7d8, step_end=step_end@entry=0xa07fe7dc) at gstbasesink.c:2491
#8  0xb77f484e in gst_base_sink_chain_unlocked (basesink=basesink@entry=0xadc0cfd8, obj=obj@entry=0x9fe18360, is_list=is_list@entry=0, pad=<optimized out>) at gstbasesink.c:3390
#9  0xb77f69a8 in gst_base_sink_chain_main (basesink=0xadc0cfd8, pad=<optimized out>, obj=0x9fe18360, is_list=0) at gstbasesink.c:3538
#10 0xb7fb8a05 in gst_validate_pad_monitor_chain_func (pad=0xad166500, parent=0xadc0cfd8, buffer=0x9fe18360) at gst-validate-pad-monitor.c:1509
#11 0xb7d62c3a in gst_pad_chain_data_unchecked (data=0x9fe18360, type=<optimized out>, pad=0xad166500) at gstpad.c:3830
#12 gst_pad_push_data (pad=0xac727578, type=type@entry=4112, data=0x9fe18360) at gstpad.c:4063
#13 0xb7d6a45e in gst_pad_push (pad=0x9fe18360, pad@entry=0xac727578, buffer=buffer@entry=0x9fe18360) at gstpad.c:4174
#14 0xb7d520c2 in gst_proxy_pad_chain_default (pad=0xac730090, parent=0xac727578, buffer=0x9fe18360) at gstghostpad.c:126
#15 0xb7d62c3a in gst_pad_chain_data_unchecked (data=0x9fe18360, type=<optimized out>, pad=0xac730090) at gstpad.c:3830
#16 gst_pad_push_data (pad=0xac70ac50, type=type@entry=4112, data=0x9fe18360) at gstpad.c:4063
#17 0xb7d6a45e in gst_pad_push (pad=0x9fe18360, buffer=0x9fe18360) at gstpad.c:4174
#18 0xb7807127 in gst_base_transform_chain (pad=0xac70ad88, parent=0xac75c788, buffer=0x9e9a7d48) at gstbasetransform.c:2260
#19 0xb7fb8a05 in gst_validate_pad_monitor_chain_func (pad=0xac70ad88, parent=0xac75c788, buffer=0x9e9a7d48) at gst-validate-pad-monitor.c:1509
#20 0xb7d62c3a in gst_pad_chain_data_unchecked (data=0x9e9a7d48, type=<optimized out>, pad=0xac70ad88) at gstpad.c:3830
#21 gst_pad_push_data (pad=0xac70a8a8, type=type@entry=4112, data=0x9e9a7d48) at gstpad.c:4063
#22 0xb7d6a45e in gst_pad_push (pad=0x9e9a7d48, buffer=0x9e9a7d48) at gstpad.c:4174
#23 0xb7807127 in gst_base_transform_chain (pad=0xac70a770, parent=0xac735588, buffer=0x9e9a7d48) at gstbasetransform.c:2260
#24 0xb7fb8a05 in gst_validate_pad_monitor_chain_func (pad=0xac70a770, parent=0xac735588, buffer=0x9e9a7d48) at gst-validate-pad-monitor.c:1509
#25 0xb7d62c3a in gst_pad_chain_data_unchecked (data=0x9e9a7d48, type=<optimized out>, pad=0xac70a770) at gstpad.c:3830
#26 gst_pad_push_data (pad=0xac75f290, type=type@entry=4112, data=0x9e9a7d48) at gstpad.c:4063
#27 0xb7d6a45e in gst_pad_push (pad=0x9e9a7d48, buffer=0x9e9a7d48) at gstpad.c:4174
#28 0xb7807127 in gst_base_transform_chain (pad=0xac75f158, parent=0x818d058, buffer=0x9e9a7d48) at gstbasetransform.c:2260
#29 0xb7fb8a05 in gst_validate_pad_monitor_chain_func (pad=0xac75f158, parent=0x818d058, buffer=0x9e9a7d48) at gst-validate-pad-monitor.c:1509
#30 0xb7d62c3a in gst_pad_chain_data_unchecked (data=0x9e9a7d48, type=<optimized out>, pad=0xac75f158) at gstpad.c:3830
#31 gst_pad_push_data (pad=0xac75f020, type=type@entry=4112, data=0x9e9a7d48) at gstpad.c:4063
#32 0xb7d6a45e in gst_pad_push (pad=0x9e9a7d48, buffer=0x9e9a7d48) at gstpad.c:4174
#33 0xb7807127 in gst_base_transform_chain (pad=0xac70aec0, parent=0xac75c490, buffer=0x9e9a7d48) at gstbasetransform.c:2260
#34 0xb7fb8a05 in gst_validate_pad_monitor_chain_func (pad=0xac70aec0, parent=0xac75c490, buffer=0x9e9a7d48) at gst-validate-pad-monitor.c:1509
#35 0xb7d62c3a in gst_pad_chain_data_unchecked (data=0x9e9a7d48, type=<optimized out>, pad=0xac70aec0) at gstpad.c:3830
#36 gst_pad_push_data (pad=0xadc32e78, type=type@entry=4112, data=0x9e9a7d48) at gstpad.c:4063
#37 0xb7d6a45e in gst_pad_push (pad=0x9e9a7d48, pad@entry=0xadc32e78, buffer=buffer@entry=0x9e9a7d48) at gstpad.c:4174
#38 0xb7d520c2 in gst_proxy_pad_chain_default (pad=0xac727428, parent=0x8191d10, buffer=0x9e9a7d48) at gstghostpad.c:126
#39 0xb7d62c3a in gst_pad_chain_data_unchecked (data=0x9e9a7d48, type=<optimized out>, pad=0xac727428) at gstpad.c:3830
#40 gst_pad_push_data (pad=0xac70a3c8, type=type@entry=4112, data=0x9e9a7d48) at gstpad.c:4063
#41 0xb7d6a45e in gst_pad_push (pad=0x9e9a7d48, buffer=buffer@entry=0x9e9a7d48) at gstpad.c:4174
#42 0xb60dfb67 in gst_queue_push_one (queue=0xac72c090) at gstqueue.c:1169
#43 gst_queue_loop (pad=0xac70a3c8) at gstqueue.c:1298
#44 0xb7d947c9 in gst_task_func (task=task@entry=0xad19f588) at gsttask.c:316
#45 0xb7d957ef in default_func (tdata=0xac75e980, pool=0x8057470) at gsttaskpool.c:68
#46 0xb7c11405 in ?? () from /lib/i386-linux-gnu/libglib-2.0.so.0
#47 0xb7c109aa in ?? () from /lib/i386-linux-gnu/libglib-2.0.so.0
#48 0xb7b8df72 in start_thread (arg=0xa07ffb40) at pthread_create.c:312
#49 0xb7ac3f8e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:129

Thread (PulseAudio main loop):
#0  0xb7fdd428 in __kernel_vsyscall ()
#1  0xb7ab501b in poll () at ../sysdeps/unix/syscall-template.S:81
#2  0xac6e5bbd in poll (__timeout=1500, __nfds=2, __fds=0xa6c02d68) at /usr/include/i386-linux-gnu/bits/poll2.h:46
#3  poll_func (ufds=0xa6c02d68, nfds=2, timeout=1500, userdata=0xac70b508) at pulse/thread-mainloop.c:69
#4  0xac6d44ee in pa_mainloop_poll (m=m@entry=0xac70b460) at pulse/mainloop.c:850
#5  0xac6d4d1d in pa_mainloop_iterate (m=m@entry=0xac70b460, block=block@entry=1, retval=retval@entry=0x0) at pulse/mainloop.c:932
#6  0xac6d4df4 in pa_mainloop_run (m=0xac70b460, retval=retval@entry=0x0) at pulse/mainloop.c:950
#7  0xac6e5b5c in thread (userdata=0xac705660) at pulse/thread-mainloop.c:88
#8  0xac68e4dd in internal_thread_func (userdata=0xac70b598) at pulsecore/thread-posix.c:83
#9  0xb7b8df72 in start_thread (arg=0xa753eb40) at pthread_create.c:312
#10 0xb7ac3f8e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:129

_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
Re: a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)

Nitin Mahajan
Some updates:
the hang is a blocking wait inside gst_system_clock_id_wait_jitter_unlocked();
the blocking call is gst_poll_wait().
It seems that the audio sink thread (PulseAudio) is related to this wait.
The GStreamer pipeline is full of buffers, and video rendering is in a blocking wait.
The buffer carries a valid PTS value.
What could flushing the audio chain have done wrong to cause the video chain to block waiting?
Thanks for sharing your opinions and ideas to help further debugging.
BR
Nitin

Thanks & Regards
--Nitin
Re: a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)

Nitin Mahajan
Please note that flush-stop is passed with FALSE so that the clock timing is deliberately not reset, since a buffer with a valid timestamp of the different language is pushed afterwards. What could trigger the queue thread to block/wait inside the video renderer's basesink because of the audio sink (PulseAudio)? The test is running on x86 with GStreamer 1.4.5. Any suggestion or pointer would be very helpful, as there is not much clue in the traces to establish the design relationship between the audio and video sinks and this blocking wait.
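One scenario worth checking, though this is an assumption on my part and not something the traces prove: a clock provided by an audio sink typically derives its time from the samples actually played, so it only advances while audio data flows. If the flush starves pulsesink of samples, the pipeline clock could stop advancing and the video sink's clock wait would never expire. A toy model in C (not the real pulsesink implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of an audio-provided pipeline clock (like GstPulseSinkClock):
 * its time is derived from samples played, so it only advances while
 * audio data flows. Hypothetical struct, for illustration only. */
typedef struct {
  uint64_t samples_played;
  uint64_t rate;            /* samples per second */
} AudioClock;

static uint64_t
audio_clock_time_ns (const AudioClock *c)
{
  return c->samples_played * 1000000000ULL / c->rate;
}

/* The video sink's clock wait completes only once the clock reaches the
 * buffer's scheduled running time. */
static int
video_wait_done (const AudioClock *c, uint64_t scheduled_ns)
{
  return audio_clock_time_ns (c) >= scheduled_ns;
}
```

In this model, if no new samples reach the audio sink after the flush, samples_played stops increasing, the clock time freezes, and video_wait_done() never becomes true: the video sink stays blocked in gst_clock_id_wait() indefinitely, which would look exactly like the observed hang.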
Thanks for your help.
Best Regards

Re: a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)

Nitin Mahajan
Hello experts,
Could you kindly suggest the possible scenarios in which gst_system_clock_id_wait_jitter_unlocked() gets blocked in basesink?
Thanks
Nitin

#37 0xb7d6a45e in gst_pad_push (pad=0x9e9a7d48, pad@entry=0xadc32e78, buffer=buffer@entry=0x9e9a7d48) at gstpad.c:4174
#38 0xb7d520c2 in gst_proxy_pad_chain_default (pad=0xac727428, parent=0x8191d10, buffer=0x9e9a7d48) at gstghostpad.c:126
#39 0xb7d62c3a in gst_pad_chain_data_unchecked (data=0x9e9a7d48, type=<optimized out>, pad=0xac727428) at gstpad.c:3830
#40 gst_pad_push_data (pad=0xac70a3c8, type=type@entry=4112, data=0x9e9a7d48) at gstpad.c:4063
#41 0xb7d6a45e in gst_pad_push (pad=0x9e9a7d48, buffer=buffer@entry=0x9e9a7d48) at gstpad.c:4174
#42 0xb60dfb67 in gst_queue_push_one (queue=0xac72c090) at gstqueue.c:1169
#43 gst_queue_loop (pad=0xac70a3c8) at gstqueue.c:1298
#44 0xb7d947c9 in gst_task_func (task=task@entry=0xad19f588) at gsttask.c:316
#45 0xb7d957ef in default_func (tdata=0xac75e980, pool=0x8057470) at gsttaskpool.c:68
#46 0xb7c11405 in ?? () from /lib/i386-linux-gnu/libglib-2.0.so.0
#47 0xb7c109aa in ?? () from /lib/i386-linux-gnu/libglib-2.0.so.0
#48 0xb7b8df72 in start_thread (arg=0xa07ffb40) at pthread_create.c:312
#49 0xb7ac3f8e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:129

Thread
#0  0xb7fdd428 in __kernel_vsyscall ()
#1  0xb7ab501b in poll () at ../sysdeps/unix/syscall-template.S:81
#2  0xac6e5bbd in poll (__timeout=1500, __nfds=2, __fds=0xa6c02d68) at /usr/include/i386-linux-gnu/bits/poll2.h:46
#3  poll_func (ufds=0xa6c02d68, nfds=2, timeout=1500, userdata=0xac70b508) at pulse/thread-mainloop.c:69
#4  0xac6d44ee in pa_mainloop_poll (m=m@entry=0xac70b460) at pulse/mainloop.c:850
#5  0xac6d4d1d in pa_mainloop_iterate (m=m@entry=0xac70b460, block=block@entry=1, retval=retval@entry=0x0) at pulse/mainloop.c:932
#6  0xac6d4df4 in pa_mainloop_run (m=0xac70b460, retval=retval@entry=0x0) at pulse/mainloop.c:950
#7  0xac6e5b5c in thread (userdata=0xac705660) at pulse/thread-mainloop.c:88
#8  0xac68e4dd in internal_thread_func (userdata=0xac70b598) at pulsecore/thread-posix.c:83
#9  0xb7b8df72 in start_thread (arg=0xa753eb40) at pthread_create.c:312
#10 0xb7ac3f8e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:129

_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel



Thanks & Regards
--Nitin

Re: a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)

Arun Raghavan-4
In reply to this post by Nitin Mahajan
On Thu, 1 Jun 2017, at 10:40 PM, Nitin Mahajan wrote:
> Please note that flush stop is passed with FALSE so that the clock timing
> is deliberately not reset, as buffers with valid timestamps for the
> different language are pushed afterwards. What could trigger the queue
> thread to block/wait inside the video renderer's basesink because of the
> PulseAudio sink? The test is running on x86 with GStreamer 1.4.5. Any
> suggestion or pointer would be very helpful, as the traces give little clue
> for establishing the design relationship between the audio and video sinks
> and the blocking wait.

It might be a good idea to look at the basesink debug output -- what is
the timestamp that you're pushing out, and what is the timestamp against
which the sink is waiting?

-- Arun

Re: a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)

Nitin Mahajan
Many thanks. Attached is the basesink debug output captured when the a/v freeze occurs (freeze_basesink_traces.txt.zip).
I have also captured some info on the blocking thread below.
While analysing the various threads, it appears that after flushing the audio chain (upon track change) the audio queue is empty while the video queue is full.
The thread pushing the next video packet is blocked on the multiqueue, waiting for space to become available, but the video renderer itself is blocked.
As a result, the whole video chain is full, buffers are not being consumed, and the thread responsible for pushing audio packets from the demux is blocked too.
Could this be the cause: the audio queue is empty post-flush and the video queue is full, so basesink waits forever, triggering a deadlock?
Also, could you point me to any reference implementation where an on-the-fly flush start/stop is done on one of the decode chains (like the audio decode chain in this case)?

(gdb) up
#4  0xb7d8d76d in gst_system_clock_id_wait_jitter_unlocked (clock=0xad1672e0, entry=0xac732140, jitter=0xa09fe650, restart=1) at gstsystemclock.c:684
684          pollret = gst_poll_wait (sysclock->priv->timer, diff);
(gdb) p diff
$1 = 16433000
(gdb) up
#5  0xb7d3c1aa in gst_clock_id_wait (id=0xac732140, jitter=jitter@entry=0xa09fe650) at gstclock.c:539
539      res = cclass->wait (clock, entry, jitter);
(gdb) p jitter
$2 = (GstClockTimeDiff *) 0xa09fe650
(gdb) p *jitter
$3 = -16433000
(gdb) p *clock
$4 = {object = {object = {g_type_instance = {g_class = 0xac708098}, ref_count = 46, qdata = 0x0}, lock = {p = 0xac705a28, i = {2893044264, 0}}, name = 0xac705a68 "GstPulseSinkClock", parent = 0x0, 
    flags = 752, control_bindings = 0x0, control_rate = 100000000, last_sync = 18446744073709551615, _gst_reserved = 0x0}, priv = 0xad167278, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}
(gdb) p *entry
$6 = {refcount = 1, clock = 0xad1672e0, type = GST_CLOCK_ENTRY_SINGLE, time = 25083000000, interval = 18446744073709551615, status = GST_CLOCK_BUSY, func = 0x0, user_data = 0x0, destroy_data = 0x0, 
  unscheduled = 0, woken_up = 0, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}
(gdb) 
$7 = {refcount = 1, clock = 0xad1672e0, type = GST_CLOCK_ENTRY_SINGLE, time = 25083000000, interval = 18446744073709551615, status = GST_CLOCK_BUSY, func = 0x0, user_data = 0x0, destroy_data = 0x0, 
  unscheduled = 0, woken_up = 0, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}
(gdb) up
#6  0xb77f1930 in gst_base_sink_wait_clock (sink=sink@entry=0xad30c6d8, time=<optimized out>, jitter=jitter@entry=0xa09fe650) at gstbasesink.c:2104
2104      ret = gst_clock_id_wait (sink->priv->cached_clock_id, jitter);
(gdb) p jitter
$8 = (GstClockTimeDiff *) 0xa09fe650
(gdb) p *jitter
$9 = -16433000
(gdb) up
#7  0xb77f3317 in gst_base_sink_do_sync (basesink=basesink@entry=0xad30c6d8, obj=obj@entry=0xa0018220, late=late@entry=0xa09fe7d8, step_end=step_end@entry=0xa09fe7dc) at gstbasesink.c:2491
2491      status = gst_base_sink_wait_clock (basesink, stime, &jitter);
(gdb) p stime
$10 = <optimized out>
(gdb) p jitter
$11 = -16433000
(gdb) p *basesink
$12 = {element = {object = {object = {g_type_instance = {g_class = 0xad30b220}, ref_count = 4, qdata = 0xac7763b8}, lock = {p = 0xad30c8b8, i = {2905655480, 0}}, name = 0xad32f5f0 "irdetovideosink0", 
      parent = 0xac72b048, flags = 32, control_bindings = 0x0, control_rate = 100000000, last_sync = 18446744073709551615, _gst_reserved = 0x0}, state_lock = {p = 0xad30c8d8, i = {0, 0}}, state_cond = {
      p = 0xad30c8f8, i = {0, 0}}, state_cookie = 3, target_state = GST_STATE_PLAYING, current_state = GST_STATE_PLAYING, next_state = GST_STATE_VOID_PENDING, pending_state = GST_STATE_VOID_PENDING, 
    last_return = GST_STATE_CHANGE_SUCCESS, bus = 0xac706298, clock = 0xad1672e0, base_time = 0, start_time = 0, numpads = 1, pads = 0xadd5c5c0, numsrcpads = 0, srcpads = 0x0, numsinkpads = 1, 
    sinkpads = 0xadd5c5d0, pads_cookie = 1, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}, sinkpad = 0xad1664f8, pad_mode = GST_PAD_MODE_PUSH, offset = 0, can_activate_pull = 0, can_activate_push = 1, 
  preroll_lock = {p = 0xad30ca78, i = {2905655928, 0}}, preroll_cond = {p = 0xad30ca98, i = {0, 0}}, eos = 0, need_preroll = 0, have_preroll = 0, playing_async = 0, have_newsegment = 1, segment = {
    flags = GST_SEGMENT_FLAG_NONE, rate = 1, applied_rate = 1, format = GST_FORMAT_TIME, base = 0, offset = 0, start = 0, stop = 18446744073709551615, time = 0, position = 25124666592, 
    duration = 18446744073709551615, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}, clock_id = 0xac732140, sync = 1, flushing = 0, running = 0, max_lateness = 20000000, priv = 0xad30c500, _gst_reserved = {
    0x0 <repeats 20 times>}}
(gdb) 

--Nitin





freeze_basesink_traces.txt.zip (394K) Download Attachment
Thanks & Regards
--Nitin

Re: a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)

Nitin Mahajan
I missed mentioning that the logs were captured with the gst-validate-1.0 application.
The scenario switches the audio track every 5 seconds.
The adaptive demux plugin outputs video and audio elementary streams.
The sequence of operations performed on the adaptive demux plugin is as follows.
On the audio src pad of the demux, to flush the buffered audio packets:
- push gst_event_new_flush_start()
- push gst_event_new_flush_stop(FALSE)
- push a new segment event
- push buffers with timestamps corresponding to the current presentation time
It works 2-3 times but then freezes.
--Nitin




Thanks & Regards
--Nitin

Re: a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)

Enrique Ocaña González
On Thursday, 8 June 2017 at 14:18:51 (CEST), Nitin Mahajan wrote:

> The sequence of operations done on adaptive demux plugin are as follows:
> On audio src pad of demux:
> Flush buffered audio packets:
> - push gst_event_new_flush_start()
> - push gst_event_new_flush_stop(FALSE)
> - push new segment event
> - push buffers with the time stamps corresponding to the current
> presentation time

First of all, sorry if my comment isn't completely related to the problem you
describe. Feel free to ignore it if it only adds noise to the discussion. I
think it might be relevant, though.

I'm having a similar problem in our use case: two appsrcs push already
demuxed buffers (video/x-h264 and audio/mpeg) and the pipeline is filled with
buffers. Then our app receives an update for some of the audio buffers (e.g.
higher/lower quality) and needs to replace the buffers already in the pipeline
(playbin) in a seamless way (keeping the streamsynchronizer happy and the
video stream unaware of the replacement in the audio stream).

We do something similar to what you describe:

- We ask the pipeline for the current position (the PTS of the buffer being
rendered)
- We send flush start and stop (false) events
- Use gst_base_src_new_seamless_segment() to send a new [position, old_stop]
  segment (to convince appsrc to schedule the new buffers properly)
- Fix the segment emitted by appsrc using a pad probe, so that it's again
  [position, old_stop], and clear its flags (not reset-time).

The new replacement buffers start to fill the audio stream of the pipeline
again, the faad audio decoder does fine (thanks to the flush), and the audio
buffers reach the audio sink (GstOMXHdmiAudioSink). So far so good.

Then the problems start. For some reason, audiobasesink doesn't reconfigure
its audioringbuffer properly. Normally, I see a diff of ~40 between segdone
(the write position in the audioringbuffer) and readseg (the read position of
the data that goes to the actual audio device). That matches segtotal
(the size of the audioringbuffer). However, after the flushes and the new
segment the diff is ~15000 (clearly wrong, much higher than segtotal). This
causes gst_audio_ring_buffer_commit() (implemented by default_commit) to wait
until the diff is again less than segtotal. It waits forever.

This long wait in the audio sink stalls the audio stream and we start to lose audio.

If you want to know if you're having the same problem as we have, enable these
logs:

 GST_DEBUG='basesink:TRACE,ringbuffer:TRACE,audiosink:TRACE,\
           audiobasesink:TRACE,audioclock:DEBUG'

and grep for this kind of messages:

 cat /tmp/log.txt | grep default_commit | grep 'diff'

You should see the diff jump (from ~40 to ~15000) like this:

0:00:25.508332021  1277 0x72e0d4f0 DEBUG             ringbuffer gstaudioringbuffer.c:1525:default_commit:<audiosinkringbuffer0> pointer at 1033, write to 1072-1340, diff 39, segtotal 40, segsize 1764, base 0
0:00:25.508480042  1277 0x72e0d4f0 DEBUG             ringbuffer gstaudioringbuffer.c:1525:default_commit:<audiosinkringbuffer0> pointer at 1033, write to 1073-0, diff 40, segtotal 40, segsize 1764, base 0
0:00:25.615070250  1277 0x72e0d4f0 DEBUG             ringbuffer gstaudioringbuffer.c:1525:default_commit:<audiosinkringbuffer0> pointer at 1034, write to 16497-1204, diff 15463, segtotal 40, segsize 1764, base 0
0:00:25.615523479  1277 0x72e0d4f0 DEBUG             ringbuffer gstaudioringbuffer.c:1525:default_commit:<audiosinkringbuffer0> pointer at 1035, write to 16497-1204, diff 15462, segtotal 40, segsize 1764, base 0

I haven't found a solution yet, so for now we're applying a workaround on
the app side to avoid the need to replace already-buffered audio data.

Cheers.

--
Enrique Ocaña González

Re: a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)

Nitin Mahajan
In reply to this post by Nitin Mahajan
It's blocked on ret = gst_clock_id_wait (sink->priv->cached_clock_id, jitter);
  /* A blocking wait is performed on the clock. We save the ClockID
   * so we can unlock the entry at any time. While we are blocking, we
   * release the PREROLL_LOCK so that other threads can interrupt the
   * entry. */
...
res = cclass->wait (clock, entry, jitter);
...

What could cause the GstSystemClock to wait forever inside:
      /* now wait on the entry, it either times out or the fd is written. The
       * status of the entry is only BUSY around the poll. */
      pollret = gst_poll_wait (sysclock->priv->timer, diff);

This part is not clear to me, especially the clock and synchronisation
management. Note also that the clock ID reports its name as
"GstPulseSinkClock". Does that mean the GStreamer system clock is linked to
pulsesink?

We also noticed this thread earlier:
Thread 
#0  0xb7fdd428 in __kernel_vsyscall ()
#1  0xb7ab501b in poll () at ../sysdeps/unix/syscall-template.S:81
#2  0xac6e5bbd in poll (__timeout=1500, __nfds=2, __fds=0xa6c02d68) at /usr/include/i386-linux-gnu/bits/poll2.h:46
#3  poll_func (ufds=0xa6c02d68, nfds=2, timeout=1500, userdata=0xac70b508) at pulse/thread-mainloop.c:69
#4  0xac6d44ee in pa_mainloop_poll (m=m@entry=0xac70b460) at pulse/mainloop.c:850
#5  0xac6d4d1d in pa_mainloop_iterate (m=m@entry=0xac70b460, block=block@entry=1, retval=retval@entry=0x0) at pulse/mainloop.c:932
#6  0xac6d4df4 in pa_mainloop_run (m=0xac70b460, retval=retval@entry=0x0) at pulse/mainloop.c:950
#7  0xac6e5b5c in thread (userdata=0xac705660) at pulse/thread-mainloop.c:88
#8  0xac68e4dd in internal_thread_func (userdata=0xac70b598) at pulsecore/thread-posix.c:83
#9  0xb7b8df72 in start_thread (arg=0xa753eb40) at pthread_create.c:312
#10 0xb7ac3f8e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:129

Are the two threads related? Is there any way to establish whether, if the
audio chain is empty of buffers, the video sink will wait forever?
Any link or suggestion to build this understanding and analysis would be very
helpful.

Thanks,

--Nitin





Thanks & Regards
--Nitin

Re: a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)

Nitin Mahajan
More observations on what may be causing ret = gst_clock_id_wait (sink->priv->cached_clock_id, jitter); to block.

In gst_pulsesink_get_time (GstClock * clock, GstAudioBaseSink * sink):

  /* if we don't have enough data to get a timestamp, just return NONE, which
   * will return the last reported time */
  if (pa_stream_get_time (pbuf->stream, &time) < 0) {
    GST_DEBUG_OBJECT (psink, "could not get time");
    time = GST_CLOCK_TIME_NONE;
  } else
    time *= 1000;

The value returned by gst_pulsesink_get_time() is always constant after the a/v freeze. The comment in the snippet above explains why: when the stream cannot supply a timestamp, the clock keeps reporting the last known time.

Can it be concluded that, since the audio chain is empty, gst_pulsesink_get_time() will always return the same timestamp? And that, as a consequence, the wait below:

  /* A blocking wait is performed on the clock. We save the ClockID
   * so we can unlock the entry at any time. While we are blocking, we
   * release the PREROLL_LOCK so that other threads can interrupt the
   * entry. */
  ret = gst_clock_id_wait (sink->priv->cached_clock_id, jitter);

would block forever, because the clock never advances to reach the scheduled entry time?

Under that hypothesis, the freeze could presumably be unblocked once the audio sink starts receiving packets again, allowing gst_pulsesink_get_time() to resume incrementing?

Kindly share your opinion and insights; I would like to understand the clocking and synchronisation better.

Thanks
--Nitin

On Thu, Jun 8, 2017 at 4:32 PM, Nitin Mahajan <[hidden email]> wrote:
it's blocked on ret = gst_clock_id_wait (sink->priv->cached_clock_id, jitter);
  /* A blocking wait is performed on the clock. We save the ClockID
   * so we can unlock the entry at any time. While we are blocking, we
   * release the PREROLL_LOCK so that other threads can interrupt the
   * entry. */
...
res = cclass->wait (clock, entry, jitter);
...

what could cause the GstSystemClock to wait forever inside:
      /* now wait on the entry, it either times out or the fd is written. The
       * status of the entry is only BUSY around the poll. */
      pollret = gst_poll_wait (sysclock->priv->timer, diff);

This part is not clear to me, especially the clock selection and synchronisation management.
Note that the clock ID reports its name as "GstPulseSinkClock", which is puzzling.
Is the GStreamer system clock linked to pulsesink somehow?

We also noticed this thread earlier:
#0  0xb7fdd428 in __kernel_vsyscall ()
#1  0xb7ab501b in poll () at ../sysdeps/unix/syscall-template.S:81
#2  0xac6e5bbd in poll (__timeout=1500, __nfds=2, __fds=0xa6c02d68) at /usr/include/i386-linux-gnu/bits/poll2.h:46
#3  poll_func (ufds=0xa6c02d68, nfds=2, timeout=1500, userdata=0xac70b508) at pulse/thread-mainloop.c:69
#4  0xac6d44ee in pa_mainloop_poll (m=m@entry=0xac70b460) at pulse/mainloop.c:850
#5  0xac6d4d1d in pa_mainloop_iterate (m=m@entry=0xac70b460, block=block@entry=1, retval=retval@entry=0x0) at pulse/mainloop.c:932
#6  0xac6d4df4 in pa_mainloop_run (m=0xac70b460, retval=retval@entry=0x0) at pulse/mainloop.c:950
#7  0xac6e5b5c in thread (userdata=0xac705660) at pulse/thread-mainloop.c:88
#8  0xac68e4dd in internal_thread_func (userdata=0xac70b598) at pulsecore/thread-posix.c:83
#9  0xb7b8df72 in start_thread (arg=0xa753eb40) at pthread_create.c:312
#10 0xb7ac3f8e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:129

Are they related? Is there any clue that would establish that, if the audio chain is empty of buffers, the video sink will wait forever?
Any link or suggestion to build this understanding and analysis would be very helpful.

Thanks,

--Nitin





On Thu, Jun 8, 2017 at 2:18 PM, Nitin Mahajan <[hidden email]> wrote:
Missed to mention that the logs are captured with gst-validate-1.0 application. 
The scenario is switching track switch every 5seconds. 
the adaptive demux plugin outputs video and audio ES.
The sequence of operations done on adaptive demux plugin are as follows:
On audio src pad of demux:
Flush buffered audio packets:
- push gst_event_new_flush_start()
- push gst_event_new_flush_stop(FALSE)
- push new segment event 
- push buffers with the time stamps corresponding to the current presentation time
It works 2-3 time but then freezes.
--Nitin

On Thu, Jun 8, 2017 at 2:13 PM, Nitin Mahajan <[hidden email]> wrote:
Many Thanks. Attached basesink debug out captured when a/v freeze (freeze_basesink_traces.txt.zip)
Also captured some info for the blocking thread below.
While analysing various threads, it seems that audio queue is empty post flushing the audio chain (upon track change) and the video queue is full.
The thread to push next video packet is blocked on multiqueue waiting for the space to be available to enqueue but video renderer is blocked.
Due to this, complete video pipeline is full and buffers not being consumed and the thread responsible to push audio packets from the demux in pipeline is blocked.
Could it be the possible reason that audio queue is empty post flush and video queue is full, so basesink will wait forever triggering a deadlock ..
Also, may request for ant reference implementation where on the fly flush start and stop is done on one of the decode chain (like audio decode chain in this case)..

(gdb) up
#4  0xb7d8d76d in gst_system_clock_id_wait_jitter_unlocked (clock=0xad1672e0, entry=0xac732140, jitter=0xa09fe650, restart=1) at gstsystemclock.c:684
684          pollret = gst_poll_wait (sysclock->priv->timer, diff);
(gdb) p diff
$1 = 16433000
(gdb) up
#5  0xb7d3c1aa in gst_clock_id_wait (id=0xac732140, jitter=jitter@entry=0xa09fe650) at gstclock.c:539
539      res = cclass->wait (clock, entry, jitter);
(gdb) p jitter
$2 = (GstClockTimeDiff *) 0xa09fe650
(gdb) p *jitter
$3 = -16433000
(gdb) p *clock
$4 = {object = {object = {g_type_instance = {g_class = 0xac708098}, ref_count = 46, qdata = 0x0}, lock = {p = 0xac705a28, i = {2893044264, 0}}, name = 0xac705a68 "GstPulseSinkClock", parent = 0x0, 
    flags = 752, control_bindings = 0x0, control_rate = 100000000, last_sync = 18446744073709551615, _gst_reserved = 0x0}, priv = 0xad167278, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}
(gdb) p *entry
$6 = {refcount = 1, clock = 0xad1672e0, type = GST_CLOCK_ENTRY_SINGLE, time = 25083000000, interval = 18446744073709551615, status = GST_CLOCK_BUSY, func = 0x0, user_data = 0x0, destroy_data = 0x0, 
  unscheduled = 0, woken_up = 0, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}
(gdb) 
$7 = {refcount = 1, clock = 0xad1672e0, type = GST_CLOCK_ENTRY_SINGLE, time = 25083000000, interval = 18446744073709551615, status = GST_CLOCK_BUSY, func = 0x0, user_data = 0x0, destroy_data = 0x0, 
  unscheduled = 0, woken_up = 0, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}
(gdb) up
#6  0xb77f1930 in gst_base_sink_wait_clock (sink=sink@entry=0xad30c6d8, time=<optimized out>, jitter=jitter@entry=0xa09fe650) at gstbasesink.c:2104
2104      ret = gst_clock_id_wait (sink->priv->cached_clock_id, jitter);
(gdb) p jitter
$8 = (GstClockTimeDiff *) 0xa09fe650
(gdb) p *jitter
$9 = -16433000
(gdb) up
#7  0xb77f3317 in gst_base_sink_do_sync (basesink=basesink@entry=0xad30c6d8, obj=obj@entry=0xa0018220, late=late@entry=0xa09fe7d8, step_end=step_end@entry=0xa09fe7dc) at gstbasesink.c:2491
2491      status = gst_base_sink_wait_clock (basesink, stime, &jitter);
(gdb) p stime
$10 = <optimized out>
(gdb) p jitter
$11 = -16433000
(gdb) p *basesink
$12 = {element = {object = {object = {g_type_instance = {g_class = 0xad30b220}, ref_count = 4, qdata = 0xac7763b8}, lock = {p = 0xad30c8b8, i = {2905655480, 0}}, name = 0xad32f5f0 "irdetovideosink0", 
      parent = 0xac72b048, flags = 32, control_bindings = 0x0, control_rate = 100000000, last_sync = 18446744073709551615, _gst_reserved = 0x0}, state_lock = {p = 0xad30c8d8, i = {0, 0}}, state_cond = {
      p = 0xad30c8f8, i = {0, 0}}, state_cookie = 3, target_state = GST_STATE_PLAYING, current_state = GST_STATE_PLAYING, next_state = GST_STATE_VOID_PENDING, pending_state = GST_STATE_VOID_PENDING, 
    last_return = GST_STATE_CHANGE_SUCCESS, bus = 0xac706298, clock = 0xad1672e0, base_time = 0, start_time = 0, numpads = 1, pads = 0xadd5c5c0, numsrcpads = 0, srcpads = 0x0, numsinkpads = 1, 
    sinkpads = 0xadd5c5d0, pads_cookie = 1, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}, sinkpad = 0xad1664f8, pad_mode = GST_PAD_MODE_PUSH, offset = 0, can_activate_pull = 0, can_activate_push = 1, 
  preroll_lock = {p = 0xad30ca78, i = {2905655928, 0}}, preroll_cond = {p = 0xad30ca98, i = {0, 0}}, eos = 0, need_preroll = 0, have_preroll = 0, playing_async = 0, have_newsegment = 1, segment = {
    flags = GST_SEGMENT_FLAG_NONE, rate = 1, applied_rate = 1, format = GST_FORMAT_TIME, base = 0, offset = 0, start = 0, stop = 18446744073709551615, time = 0, position = 25124666592, 
    duration = 18446744073709551615, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}, clock_id = 0xac732140, sync = 1, flushing = 0, running = 0, max_lateness = 20000000, priv = 0xad30c500, _gst_reserved = {
    0x0 <repeats 20 times>}}
(gdb) 

--Nitin


On Thu, Jun 8, 2017 at 12:43 PM, Arun Raghavan <[hidden email]> wrote:
On Thu, 1 Jun 2017, at 10:40 PM, Nitin Mahajan wrote:
> Please note that flush stop is passed with False so that the clock timing
> is not reset purposely as thereafter buffer with valid timestamp of
> different language is pushed. What could trigger queue thread to block /
> wait inside video renderer basesink from audio sink pulse audio. The test
> is running on x86 with gstreamer 1.4.5. Suggestion/pointer will be very
> helpful as there is not much clue from traces to establish design
> relationship between audio and video sinks and blocking wait.

It might be a good idea to look at the basesink debug output -- what is
the timestamp that you're pushing out, and what is the timestamp against
which that the sink is waiting?

-- Arun





Thanks & Regards
--Nitin

Re: a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)

Nitin Mahajan
In reply to this post by Enrique Ocaña González
Hi Arun,
May request your opinion on the clocking and basesink logs. Shall I provide more details or enable more logs ?
Kindly suggest.

Hi Enrique.
Thanks. I shall check and get back. In case you have more comments based on my latest updates, please suggest.

--Nitin


On Thu, Jun 8, 2017 at 3:55 PM, Enrique Ocaña González <[hidden email]> wrote:
On Thursday, 8 June 2017 at 14:18:51 (CEST), Nitin Mahajan wrote:

> The sequence of operations done on adaptive demux plugin are as follows:
> On audio src pad of demux:
> Flush buffered audio packets:
> - push gst_event_new_flush_start()
> - push gst_event_new_flush_stop(FALSE)
> - push new segment event
> - push buffers with the time stamps corresponding to the current
> presentation time

First of all, sorry if my comment isn't completely related to the problem you
describe. Feel free to ignore it if it only adds noise to the discussion. I
think it might be relevant, though.

I'm having a similar problem in our use case: two appsrcs pushing already
demuxed buffers (video/x-h264 and audio/mpeg) into a pipeline full of
buffers. Then our app receives an update of some of the audio buffers (e.g.
higher/lower quality) and needs to replace the buffers already in the pipeline
(playbin) in a seamless way (keeping the streamsynchronizer happy and the

We do something similar to what you describe:

- We ask the pipeline for the current position (the PTS of the buffer being
rendered)
- We send flush start and stop (false) events
- Use gst_base_src_new_seamless_segment() to send a new [position, old_stop]
  segment (to convince appsrc to schedule the new buffers properly)
- Fix the segment emitted by appsrc using a probe, so it's again [position,
  old_stop] and clear flags (not reset-time).

The new replacement buffers start to fill the audio stream of the pipeline
again, the faad audio decoder does fine (thanks to the flush), and the audio
buffers reach the audio sink (GstOMXHdmiAudioSink). So far so good.

Then the problems start. For some reason, audiobasesink doesn't reconfigure
its audioringbuffer properly. Normally, I have a diff of ~40 between segdone
(the write position in the audioringbuffer) and readseg (the read position of
the data that goes to the actual audio device). That matches with segtotal
(the size of the audioringbuffer). However, after the flushes and new segment
the diff is ~15000 (clearly wrong, much higher than segtotal). This causes
gst_audio_ring_buffer_commit() (implemented by default_commit) to wait until
the diff is again less than segtotal. It waits forever.

This long audio sink wait stalls the audio stream and we start to lose audio.

If you want to know if you're having the same problem as we have, enable these
logs:

 GST_DEBUG='basesink:TRACE,ringbuffer:TRACE,audiosink:TRACE,\
           audiobasesink:TRACE,audioclock:DEBUG'

and grep for this kind of messages:

 cat /tmp/log.txt | grep default_commit | grep 'diff'

You should see the diff jump (from ~40 to ~15000) like this:

0:00:25.508332021  1277 0x72e0d4f0 DEBUG             ringbuffer
gstaudioringbuffer.c:1525:default_commit:<audiosinkringbuffer0> pointer at
1033, write to 1072-1340, diff 39, segtotal 40, segsize 1764, base 0
0:00:25.508480042  1277 0x72e0d4f0 DEBUG             ringbuffer
gstaudioringbuffer.c:1525:default_commit:<audiosinkringbuffer0> pointer at
1033, write to 1073-0, diff 40, segtotal 40, segsize 1764, base 0
0:00:25.615070250  1277 0x72e0d4f0 DEBUG             ringbuffer
gstaudioringbuffer.c:1525:default_commit:<audiosinkringbuffer0> pointer at
1034, write to 16497-1204, diff 15463, segtotal 40, segsize 1764, base 0
0:00:25.615523479  1277 0x72e0d4f0 DEBUG             ringbuffer
gstaudioringbuffer.c:1525:default_commit:<audiosinkringbuffer0> pointer at
1035, write to 16497-1204, diff 15462, segtotal 40, segsize 1764, base 0

I haven't found a solution yet, and for now we're just applying a workaround on
the app side to avoid the need to replace already buffered audio data.

Cheers.

--
Enrique Ocaña González


Thanks & Regards
--Nitin