Gstreamer HLS Help Request

David Manpearl
I am trying to build an HLS streaming solution that generates arbitrary transport stream (.ts) chunks which fit together and play correctly in mobile video players without relying on the .m3u8 EXT-X-DISCONTINUITY tag. The difficulty is offsetting the initial PTS values of the first audio and video frames of each chunk relative to each other.

I am currently using "mpegtsmux" and have custom software to write the manifest files. I would like to modify the PTS values by calling functions on the pipeline or its elements, but if that is impossible I am prepared to write a GStreamer plugin, possibly based on mpegtsmux and incorporating some of the features of hlssink and/or hlssink2.
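If buffer timestamps can be rewritten from a pad probe, the logic I have in mind is roughly the following. This is only a sketch: the Buffer class is a stand-in for GstBuffer so the logic can be shown without a running pipeline, and the 40 ms offset and the idea of attaching the probe to mpegtsmux's video sink pad are my assumptions, not anything GStreamer prescribes.

```python
# Sketch of the timestamp-shifting logic I would put in a buffer pad
# probe (e.g. on mpegtsmux's video sink pad). Buffer is a stand-in for
# GstBuffer; real code would use gst_pad_add_probe() with a BUFFER probe.
from dataclasses import dataclass

@dataclass
class Buffer:
    pts: int  # nanoseconds, like a GstBuffer PTS

# Assumed offset: shift video 40 ms later than the audio at chunk start.
VIDEO_OFFSET_NS = 40_000_000

def video_probe(buffer, offset_ns=VIDEO_OFFSET_NS):
    """Shift a video buffer's PTS by a fixed offset relative to audio."""
    buffer.pts += offset_ns
    return buffer

buf = Buffer(pts=1_000_000_000)
print(video_probe(buf).pts)  # 1_040_000_000
```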

Goals:
1. How to offset the presentation timestamp (PTS) of the first video frame in a chunk relative to the first audio frame.
2. How to stop encoding after an exact number of audio packets have been written (i.e. 93 or 94 for 2-second chunks at 30 fps with 48 kHz audio).
3. How to determine if this can be done using existing plugins via modification of the parameters sent to the various GstElements instead of having to write a new plugin.
4. (optional) Understand how and why hlssink starts the first video frame of the first chunk in a sequence with a PTS value of 3600.
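For goal 2, the 93/94 figure comes from AAC framing: at 48 kHz with the usual 1024 samples per encoded AAC frame, a 2-second chunk covers 93.75 frames, so chunk lengths must alternate between 93 and 94 frames. A quick sanity check of that arithmetic (the rounding scheme below is just one way to keep chunk boundaries aligned over time, not the only one):

```python
# Sanity check for the 93/94 audio-frame count per 2-second chunk,
# assuming AAC with the usual 1024 samples per encoded frame.
SAMPLE_RATE = 48_000   # Hz
FRAME_SAMPLES = 1024   # samples per AAC frame
CHUNK_SECONDS = 2

frames_per_chunk = SAMPLE_RATE * CHUNK_SECONDS / FRAME_SAMPLES
print(frames_per_chunk)  # 93.75 -> chunks must alternate 94 and 93 frames

# To stay aligned over a long stream, cut each chunk at the frame
# boundary nearest the ideal 2-second mark, tracked in samples:
def frames_in_chunk(chunk_index):
    ideal_start = chunk_index * SAMPLE_RATE * CHUNK_SECONDS
    ideal_end = (chunk_index + 1) * SAMPLE_RATE * CHUNK_SECONDS
    return round(ideal_end / FRAME_SAMPLES) - round(ideal_start / FRAME_SAMPLES)

counts = [frames_in_chunk(i) for i in range(4)]
print(counts)  # [94, 94, 93, 94]
```

Over any long run this averages back to exactly 93.75 frames per chunk, so audio and video chunk boundaries never drift apart.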

It appears that the solution might involve "gst_structure_get_clock_time()" on mpegtsmux or on other video and/or audio GstElements. What confuses me in the hlssink code is that it reads the running_time into current_running_time_start at the beginning of each chunk and writes the difference (running_time - current_running_time_start) into the .m3u8 manifest file, but I cannot see where that value gets applied to the pipeline when the .ts media file itself is generated.
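For reference, my understanding of that bookkeeping is that running_time is a GstClockTime in nanoseconds, and the EXTINF duration hlssink writes is simply the latched difference scaled to seconds; the variable names below mirror hlssink's, but the code is my own illustration, not hlssink's actual source:

```python
# Illustrative sketch of the EXTINF bookkeeping as I read it.
# GstClockTime values are nanoseconds; GST_SECOND is 1_000_000_000.
GST_SECOND = 1_000_000_000

def extinf_seconds(running_time, current_running_time_start):
    """Duration written to the .m3u8 for one chunk, in seconds."""
    return (running_time - current_running_time_start) / GST_SECOND

# At the start of a chunk the running time is latched ...
current_running_time_start = 5 * GST_SECOND
# ... and at the next chunk boundary the difference becomes the EXTINF:
running_time = 7 * GST_SECOND + 40_000_000  # 2.04 s later
print(extinf_seconds(running_time, current_running_time_start))  # 2.04
```

If that reading is right, the latched value only feeds the playlist, which would explain why I cannot find it being written into the .ts media path.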

I am hoping for advice on how to create a stand-alone .ts chunk in which the first video frame starts after the initial audio packet, as per the Goals above. If anyone has pointers, I would greatly appreciate your help.
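To make goal 1 concrete, here is the PTS layout I am after, in MPEG-TS 90 kHz clock units (the 40 ms video lead and the 10-second chunk start are made-up numbers for illustration). Curiously, 40 ms is exactly 3600 ticks at 90 kHz, i.e. one frame at 25 fps, which may be related to the 3600 from goal 4:

```python
# Sketch of the PTS layout I want inside one stand-alone .ts chunk:
# first audio packet at the chunk start, first video frame offset
# slightly later. MPEG-TS PTS/DTS run on a 90 kHz clock (90 ticks/ms).
TS_CLOCK = 90_000  # Hz

def seconds_to_ts(seconds):
    """Convert stream time in seconds to 90 kHz PTS ticks."""
    return round(seconds * TS_CLOCK)

chunk_start = 10.0            # hypothetical chunk start (seconds)
video_lead_over_audio = 0.04  # hypothetical video offset (40 ms)

first_audio_pts = seconds_to_ts(chunk_start)                          # 900000
first_video_pts = seconds_to_ts(chunk_start + video_lead_over_audio)  # 903600
print(first_audio_pts, first_video_pts, first_video_pts - first_audio_pts)
```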

Backstory: The goal for our users is to let them seek to an arbitrary location in a video that has not yet been completely transcoded, have the server jump to the new location and restart transcoding from there, and then come back later to fill in the missing chunks. hlssink/hlssink2, on the other hand, transcode an entire video file into multiple chunks from start to finish in a single pass.

Sincerely and thank you
 - David

P.S. Prior to joining gstreamer-devel, I also posted a version of this question here:
