I notice that the bottleneck is that I max out 1 core of my CPU, while the
others are not in use, which prevents me from achieving the required frame
rate. A solution would be to insert a queue to divide the pipeline into
multiple threads, but 90% of the pipeline is not exposed to me, as it's
integrated in gst-rtsp-server. Therefore I wonder: is there some way I can
spread the workload of gst-rtsp-server over multiple cores?
A simple thing to try is to add a queue (thread) after the payloader and name it pay0 (even though it's not a payloader, it should still work). That will split the load of packetization (which will likely copy the stream) into its own thread. That being said, adding buffer-list support to this payloader, and sub-buffer support, may be required for such a high-bitrate case.
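If the server pipeline is built from a launch string (e.g. via gst_rtsp_media_factory_set_launch()), the trick amounts to giving the payloader a different name and naming the trailing queue pay0, since that is the name the RTSP server looks up. A sketch, with the source element assumed:

```
( videotestsrc is-live=true ! rtpgstpay name=realpay ! queue name=pay0 )
```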
Thank you for your response, I didn't know I could also have a queue as pay0.
Can you please tell me what you mean by buffer-list support and sub-buffer
support in the payloader, and give some general pointers on how to achieve
this?
Are you saying you are looking forward to working on this inside the GStreamer code? It's not related to your code, but to the upstream code (just to clarify). GstBufferList is a container for multiple GstBuffer that lets you send multiple buffers in one call. Many elements in the RTP stack are still missing support for that, which is required for high bitrates.
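For reference, the point of GstBufferList is to pay the per-push overhead once per batch of packets instead of once per packet. A minimal sketch, assuming the pad and the packet buffers come from your element:

```c
#include <gst/gst.h>

/* Sketch: push n_packets RTP packets downstream in one call instead of
 * n_packets separate gst_pad_push() calls. 'srcpad' and 'packets' are
 * assumed to exist in the element doing the payloading. */
static GstFlowReturn
push_packets (GstPad *srcpad, GstBuffer **packets, guint n_packets)
{
  GstBufferList *list = gst_buffer_list_new_sized (n_packets);

  for (guint i = 0; i < n_packets; i++)
    gst_buffer_list_add (list, packets[i]);  /* list takes ownership */

  /* one push for the whole batch; downstream elements that don't handle
   * buffer lists will see the buffers pushed individually by the core */
  return gst_pad_push_list (srcpad, list);
}
```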
I'm sorry, I thought you meant some parameters to be tweaked in the
pipeline. I ended up achieving the desired frame rate by:
* adding a queue after the rtpgstpay
* increasing the MTU size of rtpgstpay (1400 by default, while the maximum
UDP packet size is 65,535 bytes). Taking an MTU of 65000 brought the CPU
load down by more than 50%.
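Combined, the two changes above amount to a launch fragment along these lines (a sketch; the source part of the pipeline is assumed):

```
( appsrc name=mysrc is-live=true ! rtpgstpay mtu=65000 ! queue name=pay0 )
```

The mtu property comes from the common RTP base payloader, so the same tweak applies to other rtp*pay elements as well.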
On Tuesday, 5 November 2019 at 02:26 -0600, wilson1994 wrote:
> I'm sorry, I thought you meant some parameters to be tweaked in the
> pipeline. I ended up achieving the desired frame rate by:
> * adding a queue after the rtpgstpay
> * increasing the MTU size of rtpgstpay (1400 by default, while the maximum
> UDP packet size is 65,535 bytes). Taking an MTU of 65000 brought the CPU
> load down by more than 50%.
That's a fair workaround, good catch. Of course these large packets will
get fragmented on most networks, but that's probably not a huge problem in
this case.