I'm developing an RTP streaming application for drones and trying to minimize latency.
I've used data probes in my app to measure the time each element needs to process
each frame. The probes check the PTS of each buffer to distinguish frames, and
check the RTP sequence number to do the same once the frames are sent
over the network.
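The bookkeeping behind such probes can be sketched as follows (pure Python, independent of any GStreamer binding; all names and numbers here are illustrative, not from the real pipeline):

```python
# Sketch: pair per-frame wall-clock timestamps recorded at two probe
# points, keyed by PTS as described above, and report how long each
# frame spent between them.

def per_frame_latency(entry_times, exit_times):
    """entry_times/exit_times: dicts mapping PTS -> wall-clock time (ms)
    recorded by probes before and after the element under test."""
    latencies = {}
    for pts, t_in in entry_times.items():
        t_out = exit_times.get(pts)
        if t_out is not None:  # frame observed at both probe points
            latencies[pts] = t_out - t_in
    return latencies

# Hypothetical example: three frames entering an encoder and leaving
# roughly 50 ms later.
entry = {0: 1000.0, 33: 1033.0, 66: 1066.0}   # PTS -> ms at sink pad
exit_ = {0: 1050.0, 33: 1085.0, 66: 1118.0}   # PTS -> ms at src pad
print(per_frame_latency(entry, exit_))
```

Note that this only captures per-element processing time; queuing between elements and jitter-buffer waiting time show up as gaps between one element's exit probe and the next element's entry probe.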
I'm getting pretty good numbers (50 ms for encoding a
Full HD frame and only a few ms each for h264parse, network transmission and
decoding, so the entire pipeline takes around 60 ms across sender and receiver).
Here's an example screenshot (although it was taken while using videotestsrc).
I've added sync=false to ensure the system isn't waiting before rendering a
frame, and also used gop-size=1 to make sure the h264 elements don't have to
wait for subsequent frames.
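For reference, a sender pipeline along these lines might look like the sketch below (videotestsrc stands in for the real HDMI capture source, and x264enc for the i.MX6 hardware encoder, so every element and address here is an assumption, not the actual pipeline):

```shell
# Hypothetical low-latency sender sketch. key-int-max=1 plays the role
# of gop-size=1 for x264enc; sync=false on the sink avoids clock waits.
gst-launch-1.0 videotestsrc is-live=true \
  ! video/x-raw,width=1920,height=1080 \
  ! x264enc tune=zerolatency key-int-max=1 \
  ! h264parse \
  ! rtph264pay config-interval=1 pt=96 \
  ! udpsink host=192.168.1.10 port=5000 sync=false
```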
Also, I'm using an i.MX6 board with HDMI input as the transmitter and an
Apollo Lake PicoITX board for receiving.
The problem is that the real numbers are much higher, around 500 ms (when
I measure latency with an LED and a camera). I'm streaming from a GoPro,
which introduces 50 ms of latency on its HDMI output.
So according to the measured numbers the latency should be around 100-120 ms, not half a second.
I'm sure such low latency (around 100 ms) is achievable, because the 3DR Solo
drone uses the same SoC and they achieved 100-120 ms latency in their system
(the only difference is resolution: they stream 720p while I stream 1080p).
So am I missing something in my measurements? Is PTS a safe way to measure
per-frame latency? Are there other places that could introduce delay?
Can you isolate whether it is the sender or the receiver that is introducing
the latency? The video encoder is a likely culprit.
When we use x264enc, we tune it for zero latency.
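For example, a typical zero-latency encoder fragment looks like this (a sketch; the exact property values are a matter of taste, not a prescription):

```shell
# tune=zerolatency disables B-frames and lookahead buffering,
# speed-preset trades quality for encode speed, and key-int-max
# bounds keyframe spacing.
... ! x264enc tune=zerolatency speed-preset=ultrafast key-int-max=30 ! ...
```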
Run the pipeline with --gst-debug=basesink:6 and look for the latency-related
output; that should give you the pipeline latency. Alternatively, you can do a
latency query, or simply read the latency property of the GstPipeline once the
pipeline goes to the PLAYING state.
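Concretely, the debug run could look like the sketch below (the receiver pipeline shown is hypothetical, not the poster's actual one):

```shell
# Grep the basesink debug output for negotiated latency figures.
# Note: rtpjitterbuffer's latency property (200 ms by default) is
# counted into the pipeline latency, so it is worth checking too.
GST_DEBUG=basesink:6 gst-launch-1.0 udpsrc port=5000 \
    caps="application/x-rtp,media=video,encoding-name=H264,payload=96" \
  ! rtpjitterbuffer latency=50 \
  ! rtph264depay ! avdec_h264 ! autovideosink sync=false \
  2>&1 | grep -i latency
```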