I am using a gstreamer command to create a streaming pipeline & write it to
a video file using filesink. The file is being created, but its size is
350 MB for just 20 seconds of streaming. When I open the mp4 file in VLC,
it doesn't contain any information.
GStreamer is not like FFmpeg in that you can't set the video format just by
typing the desired file extension. What your current pipeline is doing is
saving the raw camera data into a file, which is why the file is so large
and probably nothing can play it.
What you probably want is this:
gst-launch-1.0 v4l2src extra-controls=c,exposure_auto=1,exposure_absolute=166 ! videoconvert ! x264enc ! h264parse ! mp4mux ! filesink location=./test.mp4 -e

(Note there must be no spaces inside the extra-controls structure; otherwise
gst-launch-1.0 treats exposure_auto=1 as a separate, nonexistent property.)
- v4l2src
The source of the video, in this case an attached webcam or other V4L2 device
- extra-controls=c,exposure_auto=1,exposure_absolute=166
Extra parameters used by v4l2src
- videoconvert
I have no idea what the color format of the camera is. It could be
YUV, RGB, grayscale, etc. This says to automatically convert the color to
something that the H.264 encoder (x264enc) can use
- x264enc
A software H.264 encoder. If your machine has support for hardware
encoders, you can swap it out here. Just check the output of
"gst-inspect-1.0 | grep 264"
- h264parse
Depending on what's using the H.264 stream coming out of the H.264
encoder, it might have to be in a specific format. This tries to format it
so that mp4mux can use it. This might not be necessary.
- mp4mux
The muxer that puts the H.264 stream into an MP4 container file
- filesink location=test.mp4
The element that actually saves the data to a file on your computer
The option '-e' tells the command-line launcher (gst-launch-1.0) to send
an EOS signal through the pipeline when you press ctrl+c. This is because the
MP4 file needs to be finalized before it closes; otherwise, you won't be able
to play the file at all. Some muxers don't require this, like matroskamux
(.mkv). You can swap mp4mux for matroskamux and get the same result.
What is your video source and what is your environment? If you are capturing
a high-resolution camera on something like a raspberry pi, it'll probably be
slow because of limited bandwidth and processing capability.
Does the video play smoothly when just viewing the live feed? Try something
like:

gst-launch-1.0 v4l2src extra-controls=c,exposure_auto=1,exposure_absolute=330 ! videoconvert ! autovideosink

You can also try specifying the framerate by adding a framerate cap, as in:

gst-launch-1.0 v4l2src ! video/x-raw,framerate=20/1 ! videoconvert ! autovideosink
Setting pipeline to PAUSED ...
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1
param: 4, val: 0
Pipeline is live and does not need PREROLL ...
Got context from element 'autovideosink0-actual-sink-vaapi':
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element
Additional debug info:
failed to bind dma_buf to VA surface buffer
Execution ended after 0:00:00.482486967
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I think this is a limitation with the Jetson Nano. Cameras that use the MIPI
port are much faster in my experience.
At least for now, you can lower the camera resolution until you get the FPS
you need, like 1280x720 or 640x480. If you have JetPack version 4.4 or
above, you can use nvv4l2camerasrc instead of v4l2src. Also, you might as
well take advantage of the hardware encoders on the Jetson Nano.
This is the pipeline I tested with. I have JetPack 4.3, so I can't use
nvv4l2camerasrc. You might have to change the source device to /dev/video0.
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,width=640,height=480 ! nvvidconv ! omxh264enc ! mp4mux ! filesink location=test.mp4 -e