Within one project I've developed a GStreamer FEC plugin to boost the performance of multimedia applications streaming over an unreliable medium, e.g. a WiFi network. In an unreliable medium there are packet losses, so the stream is produced with redundant packets (generated with Reed-Solomon) that are used to recover the lost original packets on the client side. FEC works at the RTP layer, so in the common case it is payload/protocol agnostic.
To get a better understanding, please take a look at the typical pipelines for streaming H.264 video:
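For illustration, a sender/receiver pair might look like the following sketch. The FEC encoder/decoder element names (`fecenc`/`fecdec`) and their pad names are placeholders standing in for the plugin's actual elements:

```shell
# Sender (sketch): H.264 over RTP on port 5000, repair packets
# out-of-band on port 5002. 'fecenc' and its 'src'/'fec' pads are
# hypothetical names for the FEC plugin's encoder element.
gst-launch-1.0 videotestsrc ! x264enc tune=zerolatency ! rtph264pay ! \
    fecenc name=enc \
    enc.src ! udpsink host=192.168.0.2 port=5000 \
    enc.fec ! udpsink host=192.168.0.2 port=5002

# Receiver (sketch): both sockets feed the hypothetical 'fecdec',
# which reconstructs the original RTP stream.
gst-launch-1.0 \
    udpsrc port=5000 caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! dec.src \
    udpsrc port=5002 ! dec.fec \
    fecdec name=dec ! rtph264depay ! avdec_h264 ! autovideosink
```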
Here there are two sockets between sender and receiver, SRC and FEC, so the redundant packets are simply transmitted out-of-band. 50% redundancy is applied and 50% packet loss is emulated. While a non-FEC-aware client becomes completely unusable under these conditions, a FEC-aware one recovers all lost packets, restoring the original video.
Now I'm considering packing several elementary video and audio streams into an MPEG-2 transport stream or an MP4 container for streaming with FEC in the same manner. Given the nature of the FEC implementation, it is desirable:
1. To have all elementary stream units be approximately the same size, or, alternatively, to force a constant bit rate.
2. To multiplex the elementary stream units in an interleaved fashion, i.e. A1 V1 A2 V2 ...
3. To repeat header information throughout the stream at a given frequency (like 'fragment-duration'), so a client can connect at any time.
4. To minimize latency wherever possible, for real-time use.
How can these things be achieved, e.g. for the MPEG-2 TS and MP4 formats?
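For the MP4 case, point 3 at least seems reachable with fragmented MP4, where a `moof` header precedes every fragment. A minimal sketch (the `fragment-duration` property of `mp4mux` is in milliseconds; the source and sink here are just stand-ins):

```shell
# Sketch: fragmented MP4 with a header repeated every second, so a
# client can join at any fragment boundary (point 3 above).
gst-launch-1.0 videotestsrc ! x264enc tune=zerolatency ! h264parse ! \
    mp4mux fragment-duration=1000 ! filesink location=fragmented.mp4
```

Whether the fragment duration can be pushed low enough for the latency goal (point 4) without excessive header overhead is exactly part of the question.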
I have played a bit with MPEG-2 TS, and a question arises: how can 'rtpmp2tpay' be hinted to use its 'mtu' more smartly, i.e. to cover a whole number of NAL units while not exceeding the specified limit (e.g. 300 bytes for a WiFi application)?
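Since the MP2T payload is always an integral number of 188-byte TS packets, one way to stay under a 300-byte limit is to size the mtu for exactly one TS packet: 12 bytes of RTP header + 1 × 188 = 200 bytes. A sketch of this kind of pipeline (whether the NAL-unit alignment inside the TS packets can also be controlled is the open part of the question):

```shell
# Sketch: emit one 188-byte TS packet per muxer buffer and size the
# RTP mtu so each RTP packet carries exactly one TS packet
# (12-byte RTP header + 188 = 200 bytes, under the 300-byte target).
gst-launch-1.0 videotestsrc ! x264enc tune=zerolatency ! h264parse ! \
    mpegtsmux alignment=1 ! rtpmp2tpay mtu=200 ! \
    udpsink host=192.168.0.2 port=5000
```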