However, if I try to do the same thing with the audio stream, the pipeline
picks the wrong output and tries to convert the video into audio, which fails.
I then looked into queuing the audio, to see whether a queue would pick it
up and send it through the pipeline; however, with the queue set up as in
the pipeline below, the pipeline never entered the PLAYING state:
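To show the shape of what I mean, here is a sketch of that kind of queued pipeline — this is not my exact pipeline, the RTSP URL is a placeholder, and fakesink stands in for the real branches:

```shell
# Sketch only: rtspsrc with one queue per branch, both branches dumped
# into fakesink for testing. <host>/<stream> is a placeholder URL.
gst-launch-1.0 rtspsrc location="rtsp://<host>/<stream>" name=src \
    src. ! queue ! fakesink \
    src. ! queue ! fakesink
```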
I then attempted to look at pulling out the audio separately using the same
method that the rtspsrc uses:
*gst-launch-1.0 --gst-debug=2 udpsrc uri=udp://0.0.0.0:54218 port=54218
caps="application/x-rtp, media=(string)audio, payload=(int)97" !
.recv_rtp_sink rtpsession .recv_rtp_src ! fakesink*
but that didn't give me anything worthwhile either.
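For completeness, a variant of the same command worth trying adds clock-rate to the caps (the 48000 value is only a guess and has to match the real stream) and -v to print the caps as pads link up; since uri and port both set the listening port, only port is kept here:

```shell
# Same rtpsession attempt, with clock-rate added to the caps.
# clock-rate=(int)48000 is an assumption -- it must match the stream.
gst-launch-1.0 -v --gst-debug=2 udpsrc port=54218 \
    caps="application/x-rtp, media=(string)audio, clock-rate=(int)48000, payload=(int)97" \
    ! .recv_rtp_sink rtpsession .recv_rtp_src ! fakesink
```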
My ideal scenario is to get the queued pipeline working. However, beyond
never seeming to enter the PLAYING state, the path the audio would take
doesn't get picked up by fakesink: the video gets its own proxy(?) pads as
part of rtpssrcdemux0, but rtpssrcdemux1 gets no such pads, so its fakesink
never connects to the pipeline.
What am I doing wrong? I assume I'm missing something fundamental here, as
I'd imagine rtspsrc should be able to demux both streams and send them out
to a pipeline that I can then manipulate.
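For what it's worth, the pattern I would expect to do this selects each of rtspsrc's sometimes pads with a caps filter per branch — again a sketch, with a placeholder URL and fakesink standing in for the real depay/decode chains:

```shell
# Sketch: pick rtspsrc's sometimes pads by media type, one queue per
# branch. Replace the placeholder URL and the fakesinks with real
# depayloader/decoder branches for the actual streams.
gst-launch-1.0 rtspsrc location="rtsp://<host>/<stream>" name=src \
    src. ! "application/x-rtp, media=(string)video" ! queue ! fakesink \
    src. ! "application/x-rtp, media=(string)audio" ! queue ! fakesink
```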