I'm playing with imagefreeze and I'm running into an issue with the presentation timestamps it produces.
I have a pipeline that can dynamically switch from a camera (v4l) to a still image (jpeg, png). I use imagefreeze after the decodebin to provide a constant flow of frames to the renderer (vaapisink).
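For reference, a pipeline of roughly the shape described might look like the sketch below. This is my assumption of the topology, not the actual application pipeline: the element layout, the use of input-selector for the switching, and the file name are all placeholders (the real application builds this programmatically and changes the selector's active-pad at runtime).

```shell
# Hypothetical sketch only: camera branch and still-image branch feeding
# an input-selector, rendered by vaapisink.
gst-launch-1.0 \
  input-selector name=sel ! videoconvert ! vaapisink \
  v4l2src ! sel. \
  filesrc location=still.png do-timestamp=true ! decodebin ! imagefreeze ! sel.
```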
The filesrc is set to do-timestamp, and upstream of imagefreeze the frames are correctly timestamped. The problem is that when I switch from v4l to decodebin/imagefreeze, imagefreeze starts a new segment from 0.
The result is that imagefreeze busy-loops, generating n frames to catch up with the vaapisink renderer time. For example, if I switch from v4l to decodebin/imagefreeze at 0:00:41.0, imagefreeze generates 41 seconds of frames in less than a second.
This doesn't cause a visible video issue, but it produces a spike in CPU utilization and a fair amount of log noise.
I worked around the issue by adding a buffer pad probe on the imagefreeze src pad and rewriting the PTS of each buffer.
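The workaround boils down to adding a fixed offset, the pipeline running time at the moment of the switch, to each PTS that imagefreeze emits. A minimal sketch of that arithmetic in Python (plain integers standing in for GstClockTime nanoseconds; the class name and the 41 s / 30 fps numbers are illustrative, and in the real pipeline this logic would run inside a buffer pad probe on the imagefreeze src pad):

```python
GST_SECOND = 1_000_000_000  # nanoseconds, matching GStreamer's GST_SECOND

class PtsShifter:
    """Re-stamps buffers that restart from 0 so they continue from a
    given running-time offset, e.g. the running time at which we
    switched from the camera branch to the imagefreeze branch."""

    def __init__(self, offset_ns):
        self.offset_ns = offset_ns

    def shift(self, pts_ns):
        # imagefreeze stamps its output from 0; adding the switch-over
        # offset keeps the renderer from seeing tens of seconds of
        # "late" frames it has to catch up on.
        return pts_ns + self.offset_ns

# Example: switch happened at 41 s; imagefreeze then emits PTS
# 0, 1/30 s, 2/30 s, ... at an assumed 30 fps.
shifter = PtsShifter(41 * GST_SECOND)
frame_duration = GST_SECOND // 30
shifted = [shifter.shift(n * frame_duration) for n in range(3)]
```

In the actual probe callback the same addition would be applied to the buffer's pts (and dts, if set) before returning PadProbeReturn.OK.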
From my reading of the imagefreeze code, it ignores the base time and running time. Should imagefreeze take the base time and running time into account?
Is there another option besides imagefreeze?
“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.” - Tony Hoare