I'm not quite sure if the title fits my problem but I'll try to explain:
I have an application that draws samples received from this pipeline
filesrc ! decodebin ! appsink
to OpenGL. My appsink uses max-buffers=1, drop=true, and callbacks for
receiving prerolls and samples. In my example the application runs at a
refresh rate of 30 Hz (4K screen) and I'm playing the bunny video at 60 Hz.
I shortened the video down to 30 seconds and play it in a loop. 30 seconds
of 60 Hz video makes 1800 samples to receive in total, and 900 (= 30 Hz *
30 s) iterations of my application loop in which I fetch the frames and
draw them to OpenGL. The loop really does run at 30 Hz; I can measure that
and it works.
The problem I face is that my application mostly does not receive the
frames from the sink uniformly. In the application I count the iterations
that have a sample to draw and those that don't. The first run of the video
(before pausing, seeking back to the start, and playing again) usually has
maybe 40 loop iterations without any frame to display, which makes 900 - 40
= 860 frames displayed in 30 seconds, i.e. roughly 30 Hz. The vast majority
of further replays, however, has about 450 loop iterations without any
sample to display, which gives a perceived frame rate of 900 - 450 = 450
frames in 30 seconds, i.e. about 15 Hz.
That means that every second pair of iterations yields no sample from the
appsink, whereas the other pair yields something like 4 samples, of which 3
are discarded.
Is this some sort of timing issue? Can one somehow get this under control?