Thanks in advance to anyone who can offer feedback or insight into these issues.
We have constructed a multicast pipeline that reads 2 SDP files from
local storage (retrieved by us manually via HTTP) and parses them. This then
feeds the downstream audio and video bins (see attached PDF for the
pipeline). The streams get stripped down in the bins to H.264 video and AAC
audio and muxed together into an MP4 file that is written to disk and sliced
at periodic intervals. NOTE: we are using the Python bindings from PyGObject.
The pipeline elements are:
1) filesrc - feeds the SDP file to the sdpdemux element.
2) sdpdemux - parses the SDP and links its source pads to the video and
audio bins carrying the RTP streams.
3) queue - buffers the RTP elementary streams for video and audio.
4) rtpmp4gdepay (audio) vs rtph264depay (video) - depayloads the RTP
packets.
5) aacparse (audio) vs h264parse (video) - parses out the AAC (audio) or
H.264 (video) elementary stream.
6) queue - buffers the encoded audio and video for feeding splitmuxsink.
7) splitmuxsink - muxes the audio and video together into an MP4 file and
slices the MP4 at periodic intervals.
We have been working with this pipeline for a while without issue, but
recently observed 2 intermittent failures in the pipeline:
1) Recordings sometimes fail to start
a. The pipeline is constructed and started, and the following sequence of
errors is emitted from the GStreamer code:
i. Padname stream_0 is not unique in element sdpdemuxVid, not adding
ii. gst_pad_set_active: assertion 'GST_IS_PAD (pad)' failed
iii. gst_element_remove_pad: assertion 'GST_IS_PAD (pad)' failed
b. The error spewed out on the event bus gives us:
i. gerror=GLib.Error('Internal data stream error.'), debug string
'streaming stopped, reason not-linked (-1)'
c. With packet-capture analysis we have ruled out changes in the SSRC and PT
values of the packets, which we theorized could have caused this.
2) Recordings sometimes stop prematurely
a. We get a GStreamer bus event that says it is an EOS (End of Stream)
event, which I believe means a downstream pipeline element detected end of
stream; however, our packet captures show the stream is still ongoing.
We are trying to see if we can instrument something to report which element
generated the EOS event, but that has been challenging to do without
degrading performance by raising the global log level.