I am very new to GStreamer, so I would be happy if you could help me.
I need to stream a near-zero-latency video signal from a webcam to a server
and then be able to view the stream on a website.
The webcam is connected to a Raspberry Pi 3, because there are space
constraints on the mounting platform. As a result of using the Pi I really
can't transcode the video on the Pi itself. Therefore I bought a Logitech
C920 webcam, which can output a raw H.264 stream directly.
So far I have managed to view the stream on my Windows machine, but I didn't
manage to get the website part working.
My understanding of this command is: take the signal of video device 0,
which is an H.264 stream with a certain width, height and framerate. Then
pack it into RTP packets with a high enough MTU to avoid artefacts,
encapsulate the RTP packets in UDP packets and stream them to an IP and port.
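The sender pipeline I describe above looks roughly like this (a sketch of what I run on the Pi; the device path, resolution, framerate, MTU and target IP/port are placeholders for my actual values):

```shell
# Sender (Raspberry Pi): take the camera's native H.264 stream,
# payload it as RTP and push it out over UDP -- no transcoding.
gst-launch-1.0 -v v4l2src device=/dev/video0 \
  ! video/x-h264,width=1280,height=720,framerate=30/1 \
  ! h264parse \
  ! rtph264pay config-interval=1 pt=96 mtu=1400 \
  ! udpsink host=192.168.0.10 port=5000
```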
My understanding of this command is: receive UDP packets on port 5000. The
application/x-rtp caps say there are RTP packets inside. I don't know
exactly what rtpjitterbuffer does, but it reduces the latency of the video
a bit. rtph264depay says that inside the RTP packets there is an
H.264-encoded stream. To get the raw data that fpsdisplaysink understands,
we need to decode the H.264 signal with avdec_h264.
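For reference, the receiver pipeline I describe above looks roughly like this (a sketch of what I run on the Windows machine; the port, payload type and jitterbuffer latency are placeholders for my actual values):

```shell
# Receiver (Windows): take RTP/H.264 from UDP, depayload,
# decode and display with an FPS overlay.
gst-launch-1.0 -v udpsrc port=5000 \
  ! application/x-rtp,media=video,encoding-name=H264,payload=96 \
  ! rtpjitterbuffer latency=50 \
  ! rtph264depay \
  ! avdec_h264 \
  ! videoconvert \
  ! fpsdisplaysink sync=false
```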
My next step was to change the receiver sink to a local TCP sink and embed
that signal with the following HTML5 tag:

<video width="320" height="240">
  <source src="http://localhost:#port#">
</video>

If I view the website I can't see the stream, but when I analyse the data I
can see that the video data arrives as plain text.
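The modified receiver I tried looks roughly like this (a sketch; host and port are placeholders, and it serves the depayloaded H.264 bytes with no container, which is probably why the browser only sees plain data):

```shell
# Receiver variant: instead of displaying, serve the depayloaded
# H.264 elementary stream over a local TCP socket for the browser.
gst-launch-1.0 udpsrc port=5000 \
  ! application/x-rtp,media=video,encoding-name=H264,payload=96 \
  ! rtph264depay \
  ! h264parse \
  ! tcpserversink host=127.0.0.1 port=8080
```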
Am I missing a video container like MP4 for my video?
Am I wrong with decoding?
What am I doing wrong?
How can I improve my solution?
How would you solve that problem?