I have a custom source for cameras. This source lets the library allocate n buffers for the image data in the background, wraps each of them with gst_buffer_new_wrapped_full, and pushes them into the pipeline. Since the pipeline might hold on to a buffer for a while, I use the GDestroyNotify callback to re-queue a buffer only after downstream has released it.
The problem I am having is that when the system experiences heavy load, the pipeline seems to 'eat' the buffers and does not return them, leaving me with fewer buffers to fill, or none at all.
Is there a recommended way of dealing with this situation?
Should I just allocate additional buffers in the hopes that the load goes away?
Can I somehow reclaim buffers that take too long to come back?
Is my buffer management simply wrong and are there better ways to prevent this entirely?
You can insert queue elements into the pipeline. This helps in two ways:
1. Each queue breaks the pipeline into separate threads.
2. It buffers data when downstream is throttled. Play around with the max-size-* properties.
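For example, in gst-launch-1.0 syntax (the source and sink element names here are placeholders for whatever your pipeline actually uses):

```shell
# A queue between the custom source and a slow sink decouples them into
# separate threads and bounds how many buffers downstream can hold at once.
# max-size-buffers limits the queue by buffer count; setting the time and
# byte limits to 0 disables those two limits.
gst-launch-1.0 mycamerasrc ! \
    queue max-size-buffers=4 max-size-time=0 max-size-bytes=0 leaky=downstream ! \
    autovideosink
```

Note the leaky=downstream setting: when the queue is full it drops its oldest buffer instead of blocking, and a dropped buffer gets unreffed, which fires your GDestroyNotify and hands the buffer back to your pool. Whether dropping frames under load is acceptable depends on your application.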