I have a process that reads packets from an RTMP client and feeds them into a GStreamer pipeline via an appsrc, as an FLV container stream. From the appsrc the data flows into a decodebin, then into a custom element where every frame is analyzed, and finally it is re-encoded and sent to an RTMP server.
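For reference, the topology described above, sketched in gst-launch style (the analyzer, encoder, and sink element names here are placeholders I've chosen for illustration, not my actual code, and the properties shown are only the ones I believe are relevant):

```
appsrc name=src format=time caps=video/x-flv \
  ! decodebin \
  ! my-analyzer \
  ! x264enc tune=zerolatency \
  ! flvmux streamable=true \
  ! rtmp2sink location=rtmp://<server>/<stream>
```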
My problem is that the number of frames per second reaching my custom element is around ~17 when I would expect 30. In fact, if I run the pipeline for 1 minute, it ends up buffering an extra minute's worth of data and keeps draining it even after I've stopped feeding the appsrc. I'm not sure the ratio is exactly half (i.e. ~17 fps vs. ~30 fps), but that's what it looks like.
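A back-of-the-envelope check of the numbers (the 2x stretch factor is my hypothesis, not something I've confirmed): if a clock-synced element paces a nominally 30 fps stream whose timestamps span twice the wall-clock time, frames would arrive at half rate, and a 60-second feed would take 120 seconds to drain.

```python
# Hypothetical: timestamps stretched by 2x relative to wall-clock time.
NOMINAL_FPS = 30
STRETCH = 2.0  # assumed scaling factor, not confirmed

# Frames are released at half the nominal rate.
observed_fps = NOMINAL_FPS / STRETCH
print(observed_fps)  # 15.0 -- in the ballpark of the ~17 fps observed

# A 60 s recording takes 120 s to drain, i.e. the pipeline keeps
# running for an extra minute after input stops.
input_seconds = 60
extra_seconds = input_seconds * STRETCH - input_seconds
print(extra_seconds)  # 60.0
```

The observed ~17 fps doesn't match 15 exactly, which is why I'm not certain it's precisely half.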
As an interesting side note, I tried halving the timestamps written to the appsrc (in the FLV container) and I achieved the expected ~30 fps with no latency or buffering, so I know the software is capable of those speeds. It seems there is an artificial slowdown somewhere between the appsrc and my custom element, perhaps in the decodebin.
Has anyone experienced anything like this before? I'm fairly certain the timestamps I'm writing are correct, since they are basically a pass-through calculation from the RTMP client.
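For context, the pass-through calculation amounts to a unit conversion like the sketch below (the function name is mine, not from my actual code): FLV tag timestamps are in milliseconds, while GstBuffer PTS values are in nanoseconds, so any accidental extra scale factor at this step would change the effective playback rate.

```python
# Nanoseconds per millisecond, matching GStreamer's GST_MSECOND constant.
GST_MSECOND = 1_000_000

def flv_ts_to_pts(flv_timestamp_ms: int) -> int:
    """Map an FLV tag timestamp (milliseconds) to a buffer PTS (nanoseconds)."""
    return flv_timestamp_ms * GST_MSECOND

# Two frames ~33 ms apart (30 fps) should stay ~33 ms apart in PTS.
assert flv_ts_to_pts(133) - flv_ts_to_pts(100) == 33 * GST_MSECOND
```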