GStreamer is a toolkit for building audio- and video-processing pipelines. A pipeline might stream video from a file to a network, or add an echo to a recording, or (most interesting to us) capture the output of a Video4Linux device. GStreamer is most often used to power graphical applications such as [https://wiki.gnome.org/Apps/Videos Totem], but can also be used directly from the command-line. This page will explain how GStreamer is better than the alternatives, and how to build an encoder using its command-line interface.

'''Before reading this page''', see [[V4L_capturing|V4L capturing]] to set your system up and create an initial recording. This page assumes you have already implemented the simple pipeline described there.

== Introduction to GStreamer ==

No two use cases for encoding are quite alike. What's your preferred workflow? Is your processor fast enough to encode high quality video in real-time? Do you have enough disk space to store the raw video then process it after the fact? Do you want to play your video in DVD players, or is it enough that it works in your version of VLC? How will you work around your system's obscure quirks?

Use GStreamer if you want the best video quality possible with your hardware, and don't mind spending a weekend browsing the Internet for information.

Avoid GStreamer if you just want something quick-and-dirty, or can't stand programs with bad documentation and unhelpful error messages.

=== Why is GStreamer better at encoding? ===

GStreamer isn't as easy to use as <code>mplayer</code>, and doesn't have as advanced editing functionality as <code>ffmpeg</code>. But it has superior support for synchronising audio and video in disturbed sources such as VHS tapes. If you specify your input is (say) 25 frames per second video and 48,000Hz audio, most tools will synchronise audio and video simply by writing 1 video frame, 1,920 audio frames, 1 video frame and so on. There are at least three ways this can cause errors:
* '''initialisation timing''': audio and video desynchronised by a fixed amount from the first frame, usually caused by audio and video devices taking different amounts of time to initialise. For example, the first audio frame might be delivered to GStreamer 0.01 seconds after it was requested, but the first video frame might not be delivered until 0.7 seconds after it was requested, causing all video to be about 0.7 seconds behind the audio
** <code>mencoder</code>'s ''-delay'' option solves this by delaying the audio
* '''failure to encode''': frames that desynchronise gradually over time, usually caused by audio and video shifting relative to each other when frames are dropped. For example, if your CPU is not fast enough and occasionally drops a video frame, after 25 dropped frames the video will be one second ahead of the audio
** <code>mencoder</code>'s ''-harddup'' option solves this by duplicating other frames to fill in the gaps
* '''source frame rate''': frames that aren't delivered at the advertised rate, usually caused by inaccurate clocks in the source hardware. For example, a low-cost webcam that advertises 25 FPS video and 48kHz audio might actually deliver 25.01 video frames and 47,999 audio frames per second, causing your audio and video to drift apart by a second or so per hour
** video tapes are especially problematic here - if you've ever seen a VCR struggle during those few seconds between two recordings on a tape, you've seen it adjusting the tape speed to track the source accurately. Frame counts can vary enough during these periods to instantly desynchronise audio and video
** <code>mencoder</code> has no solution for this problem
GStreamer solves these problems by attaching a timestamp to each incoming frame based on the time GStreamer receives the frame. It can then mux the sources back together accurately using these timestamps, either by using a format that supports variable framerates or by duplicating frames to fill in the blanks:

# If you choose a container format that supports timestamps (e.g. Matroska), timestamps are automatically written to the file and used to vary the playback speed
# If you choose a container format that does not support timestamps (e.g. AVI), you must duplicate other frames to fill in the gaps by adding the <code>videorate</code> and <code>audiorate</code> plugins to the end of the relevant pipelines

To get accurate timestamps, specify the <code>do-timestamp=true</code> option for all your sources. This ensures accurate timestamps are retrieved from the driver where possible. Sadly, many v4l2 drivers don't support timestamps - GStreamer will generate timestamps for these drivers to stop audio and video drifting apart, but you will need to fix the initialisation timing yourself (discussed below).

Once you've encoded your video with GStreamer, you might want to ''transcode'' it with <code>ffmpeg</code>'s superior editing features.
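For the AVI case in particular, here is a minimal sketch - it assumes the <code>$VIDEO_DEVICE</code> and <code>$VIDEO_CAPABILITIES</code> shell variables introduced in [[V4L_capturing|V4L capturing]], and a real pipeline would usually also carry audio through <code>audiorate</code>:

 # sketch only: timestamped video, with videorate duplicating frames to fill gaps
 gst-launch-1.0 \
     v4l2src device=$VIDEO_DEVICE do-timestamp=true \
     ! $VIDEO_CAPABILITIES \
     ! videorate \
     ! avimux \
     ! filesink location=test-$( date --iso-8601=seconds ).avi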
=== Getting GStreamer ===

GStreamer, its most common plugins and its tools are available through your distribution's package manager. Most Linux distributions include both the legacy ''0.10'' and modern ''1.0'' release series - each has bugs that stop it from working on some hardware, and this page focuses mostly on the modern ''1.0'' series. Converting between ''0.10'' and ''1.0'' is mostly just search-and-replace work (e.g. changing instances of <code>ff</code> to <code>av</code> because of the switch from <code>ffmpeg</code> to <code>libavcodec</code>). See [http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/chapter-porting-1.0.html the porting guide] for more.

Other plugins are also available, such as <code>[http://gentrans.sourceforge.net/ GEntrans]</code> (used in some examples below). Google might help you find packages for your distribution; otherwise you'll need to download and compile them yourself.
=== Using GStreamer with gst-launch-1.0 ===

<code>gst-launch-1.0</code> is the standard command-line interface to GStreamer. Here's the simplest pipeline you can build:

 gst-launch-1.0 fakesrc ! fakesink

This connects a single (fake) source to a single (fake) sink using the 1.0 series of GStreamer. GStreamer can build all kinds of pipelines, but a typical capture pipeline looks like the examples below: one or more sources, converters and encoders, a muxer, and a sink.

To get a list of elements that can go in a GStreamer pipeline, do:

 gst-inspect-1.0 | less

Pass an element name to <code>gst-inspect-1.0</code> for detailed information. For example:

 gst-inspect-1.0 fakesrc
 gst-inspect-1.0 fakesink
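With hundreds of elements installed, filtering the list is often quicker; a small sketch using standard shell tools:

 # find every element whose name or description mentions "h264"
 gst-inspect-1.0 | grep -i h264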
GStreamer can also draw diagrams of the pipelines it builds. Install [http://www.graphviz.org Graphviz] to build graphs of your pipelines. For faster viewing of those graphs, you can also install [https://github.com/jrfonseca/xdot.py xdot]:

 mkdir gst-visualisations
 GST_DEBUG_DUMP_DOT_DIR=gst-visualisations gst-launch-1.0 fakesrc ! fakesink
 xdot gst-visualisations/*-gst-launch.*_READY.dot

You can also compile those graphs to PNG, SVG or other supported formats:

 dot -Tpng gst-visualisations/*-gst-launch.*_READY.dot > my-pipeline.png

To get graphs of the example pipelines below, prepend <code>GST_DEBUG_DUMP_DOT_DIR=gst-visualisations </code> to the <code>gst-launch-1.0</code> command. Run this command to generate a graph of GStreamer's most interesting stage:

 xdot gst-visualisations/*-gst-launch.PLAYING_READY.dot

Remember to empty the <code>gst-visualisations</code> directory between runs.
=== Using GStreamer with entrans ===

<code>gst-launch-1.0</code> is the main command-line interface to GStreamer, available by default. But <code>entrans</code> is a bit smarter:

* it provides partly-automated composition of GStreamer pipelines
* it allows you to cut streams, for example to capture for a predefined duration. That ensures headers are written correctly, which is not always the case if you close <code>gst-launch-1.0</code> by pressing Ctrl+C. To use this feature, insert a ''dam'' element after the first ''queue'' of each part of the pipeline
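As a sketch only - entrans option syntax has varied between versions, so treat the <code>-s cut-time -c 0-60</code> part as an assumption and check <code>entrans --help</code> before copying - a 60-second capture with the ''dam'' element in place might look like:

 # hypothetical invocation: record for 60 seconds, then close the file cleanly
 entrans -s cut-time -c 0-60 -- \
     v4l2src device=$VIDEO_DEVICE do-timestamp=true \
     ! queue ! dam \
     ! $VIDEO_CAPABILITIES \
     ! avimux \
     ! filesink location=test-$( date --iso-8601=seconds ).avi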
== Building pipelines ==

You will probably need to build your own GStreamer pipeline for your particular use case. This section contains examples to give you the basic idea.

Note: for consistency and ease of copy/pasting, all filenames in this section are of the form <code>test-$( date --iso-8601=seconds )</code> - your shell should automatically convert this to e.g. <code>test-2010-11-12T13:14:15+1600.avi</code>

=== Record raw video only ===

A simple pipeline that initialises one video source, sets the video format, muxes it into a file format, then saves it to a file:

 gst-launch-1.0 \
     v4l2src device=$VIDEO_DEVICE \
     ! $VIDEO_CAPABILITIES \
     ! avimux \
     ! filesink location=test-$( date --iso-8601=seconds ).avi

This will create an AVI file with raw video and no audio. It should play in most software, but the file will be huge.
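To double-check the result, you can play the file back with GStreamer's own <code>playbin</code> element (substitute the filename the command above actually created):

 gst-launch-1.0 playbin uri=file:///path/to/test-2010-11-12T13:14:15+1600.avi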
=== Record raw audio only ===

A simple pipeline that initialises one audio source, sets the audio format, muxes it into a file format, then saves it to a file:

 gst-launch-1.0 \
     alsasrc device=$AUDIO_DEVICE \
     ! $AUDIO_CAPABILITIES \
     ! avimux \
     ! filesink location=test-$( date --iso-8601=seconds ).avi

This will create an AVI file with raw audio and no video.
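If your distribution ships <code>gst-discoverer-1.0</code> (part of gst-plugins-base), it will confirm what actually ended up in the container:

 # prints the duration, container format and per-stream details
 gst-discoverer-1.0 test-2010-11-12T13:14:15+1600.avi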
=== Record video and audio ===

 gst-launch-1.0 \
     v4l2src device=$VIDEO_DEVICE \
     ! $VIDEO_CAPABILITIES \
     ! mux. \
     alsasrc device=$AUDIO_DEVICE \
     ! $AUDIO_CAPABILITIES \
     ! mux. \
     avimux name=mux \
     ! filesink location=test-$( date --iso-8601=seconds ).avi

Instead of a straightforward pipe with a single source leading into a muxer, this pipe has three parts:

# a video source leading to a named element (<code>! mux.</code> with a full stop means "pipe to the named element")
# an audio source leading to the same named element
# a named muxer element leading to a file sink

Muxers combine data from many inputs into a single output, allowing you to build quite flexible pipes.
=== Create multiple sinks ===

The <code>tee</code> element splits a single source into multiple outputs:

 gst-launch-1.0 \
     v4l2src device=$VIDEO_DEVICE \
     ! $VIDEO_CAPABILITIES \
     ! avimux \
     ! tee name=network \
     ! filesink location=test-$( date --iso-8601=seconds ).avi \
     network. ! tcpclientsink host=127.0.0.1 port=5678

This sends your stream to a file (<code>filesink</code>) and out over the network (<code>tcpclientsink</code>, fed from the <code>network.</code> branch of the tee). To make this work, you'll need another program listening on the specified port (e.g. <code>nc -l 127.0.0.1 -p 5678</code>).
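As a usage sketch, start the listener in a second terminal before launching the pipeline (the output filename is arbitrary):

 # terminal 1: receive the stream and save a copy
 nc -l 127.0.0.1 -p 5678 > remote-copy.avi
 # terminal 2: run the tee pipeline above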
=== Encode audio and video ===

As well as piping streams around, GStreamer can manipulate their contents. The most common manipulation is to encode a stream:

 gst-launch-1.0 \
     v4l2src device=$VIDEO_DEVICE \
     ! $VIDEO_CAPABILITIES \
     ! videoconvert \
     ! theoraenc \
     ! queue \
     ! mux. \
     alsasrc device=$AUDIO_DEVICE \
     ! $AUDIO_CAPABILITIES \
     ! audioconvert \
     ! vorbisenc \
     ! mux. \
     oggmux name=mux \
     ! filesink location=test-$( date --iso-8601=seconds ).ogg

The <code>theoraenc</code> and <code>vorbisenc</code> elements encode the video and audio using Ogg Theora and Ogg Vorbis encoders. The pipes are then muxed together into an Ogg container before being saved.
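The same shape works for other encoder/muxer families. For instance, a minimal WebM sketch (assuming the <code>vp8enc</code> and <code>webmmux</code> elements are installed - they ship with gst-plugins-good):

 gst-launch-1.0 \
     v4l2src device=$VIDEO_DEVICE \
     ! $VIDEO_CAPABILITIES \
     ! videoconvert \
     ! vp8enc \
     ! queue \
     ! mux. \
     alsasrc device=$AUDIO_DEVICE \
     ! $AUDIO_CAPABILITIES \
     ! audioconvert \
     ! vorbisenc \
     ! mux. \
     webmmux name=mux \
     ! filesink location=test-$( date --iso-8601=seconds ).webm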
=== Add buffers ===

Different elements work at different speeds. For example, a CPU-intensive encoder might fall behind when another process uses too much processor time, or a duplicate frame detector might hold frames back while it examines them. This can cause streams to fall out of sync, or frames to be dropped altogether. You can add queues to smooth these problems out:

 gst-launch-1.0 -q -e \
     v4l2src device=$VIDEO_DEVICE \
     ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
     ! $VIDEO_CAPABILITIES \
     ! videoconvert \
     ! x264enc interlaced=true pass=quant quantizer=0 speed-preset=ultrafast byte-stream=true \
     ! progressreport update-freq=1 \
     ! mux. \
     alsasrc device=$AUDIO_DEVICE \
     ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
     ! $AUDIO_CAPABILITIES \
     ! audioconvert \
     ! flacenc \
     ! mux. \
     matroskamux name=mux min-index-interval=1000000000 \
     ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
     ! filesink location=test-$( date --iso-8601=seconds ).mkv

This creates a file using FLAC audio and x264 video in lossless mode, muxed into a Matroska container. Because we used <code>speed-preset=ultrafast</code>, the buffers should just smooth out the flow of frames through the pipelines. Even though the buffers are set to the maximum possible size, <code>speed-preset=veryslow</code> would eventually fill the video buffer and start dropping frames.
Some other things to note about this pipeline:

* [https://trac.ffmpeg.org/wiki/Encode/H.264 FFmpeg's H.264 page] includes a useful discussion of speed presets (both programs use the same underlying library)
* <code>quantizer=0</code> sets the video codec to lossless mode (~30GB/hour). Anything up to <code>quantizer=18</code> should not lose information visible to the human eye, and will produce much smaller files (~10GB/hour)
* The Matroska format supports variable framerates, which can be useful for VHS tapes that might not deliver the same number of frames each second
* <code>min-index-interval=1000000000</code> improves seek times by telling the Matroska muxer to create one ''cue data'' entry per second of playback. Cue data is a few kilobytes per hour, added to the end of the file when encoding completes. If you try to watch your Matroska video while it's being recorded, it will take a long time to skip forward/back because the cue data hasn't been written yet
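If lossless files are too large for your setup, a plausible middle ground (suggested by the numbers above, not a tested recipe) is to swap the <code>x264enc</code> line for a visually-lossless one and spend more CPU time on compression:

 ! x264enc interlaced=true pass=quant quantizer=18 speed-preset=medium byte-stream=true \

Run <code>gst-inspect-1.0 x264enc</code> to see the full list of presets and properties.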
== Common capturing issues and their solutions ==

=== Reducing Jerkiness ===

If motion that should appear smooth instead stops and starts, try the following:

'''Check for muxer issues'''. Some muxers need big chunks of data, which can cause one stream to pause while it waits for the other to fill up. Change your pipeline to pipe your audio and video directly to their own <code>filesink</code>s (see the sketch at the end of this section) - if the separate files don't judder, the muxer is the problem.

* If the muxer is at fault, add <code>! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0</code> immediately before each stream goes to the muxer
* queues have hard-coded maximum sizes - you can chain queues together if you need more buffering than one queue can hold

'''Check your CPU load'''. When GStreamer uses 100% CPU, it may need to drop frames to keep up.

* If frames are dropped occasionally when CPU usage spikes to 100%, add a (larger) buffer to help smooth things out.
** this can be a source's internal buffer (e.g. ''alsasrc buffer-time=2000000''), or it can be an extra buffering step in your pipeline (''! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0'')
* If frames are dropped when other processes have high CPU load, consider using [https://en.wikipedia.org/wiki/Nice_(Unix) nice] to make sure encoding gets CPU priority
* If frames are dropped regularly, use a different codec, change the parameters, lower the resolution, or otherwise choose a less resource-intensive solution

As a general rule, you should try increasing buffers first - if it doesn't help, it will just increase the pipeline's latency a bit. Be careful with <code>nice</code>, as it can slow down or even halt your computer.

'''Check for incorrect timestamps'''. If your video driver works by filling up an internal buffer then passing a cluster of frames without timestamps, GStreamer will think these should all have (nearly) the same timestamp. Make sure you have a <code>videorate</code> element in your pipeline, then add ''silent=false'' to it. If it reports many framedrops and framecopies even when the CPU load is low, the driver is probably at fault.

* <code>videorate</code> on its own will actually make this problem worse by picking one frame and replacing all the others with it. Instead, install <code>entrans</code> and add its ''stamp'' element between ''v4l2src'' and ''queue'' (e.g. ''v4l2src do-timestamp=true ! stamp sync-margin=2 sync-interval=5 ! videorate ! queue'')
* ''stamp'' intelligently guesses timestamps if drivers don't support timestamping. Its ''sync-'' options drop or copy frames to get a nearly-constant framerate.

* Snow at the start of a recording is just plain ugly. To get black input instead from a VCR, use the remote control to change the input source before you start recording
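Here is a minimal sketch of the muxer check described above - each stream gets its own muxer and file, so any judder that remains cannot be the muxer's fault (it assumes the same capability variables as the other examples on this page):

 gst-launch-1.0 -e \
     v4l2src device=$VIDEO_DEVICE \
     ! $VIDEO_CAPABILITIES \
     ! avimux \
     ! filesink location=video-only.avi \
     alsasrc device=$AUDIO_DEVICE \
     ! $AUDIO_CAPABILITIES \
     ! avimux \
     ! filesink location=audio-only.avi

If both files play smoothly on their own, re-add the muxer and start adding queues.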
=== Investigating bugs in GStreamer ===

GStreamer comes with an extensive tracing system that lets you figure out problems, although you often need to understand GStreamer's internals to read the traces. See [https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gst-running.html the documentation page] for the basics of how the tracing system works. Sadly, you will probably run into several GStreamer bugs while creating a pipeline, and the debugging information usually isn't enough to diagnose a problem on its own. Debugging usually goes something like this:

# see if there is a useful error message by enabling the ERROR debug level: <code>GST_DEBUG=2 gst-launch-1.0 ...</code>
# try similar pipelines - reduce the pipeline to its most minimal form, then add elements back until you can reproduce the issue
#* if possible, start with a working version and make one change at a time until you identify the single change that triggers the problem
# if the issue is with a V4L2 element, enable full v4l2 traces with <code>GST_DEBUG="v4l2*:7,2" gst-launch-1.0 ...</code>
# find an error message that looks relevant, and search the Internet for information about it
# try more variations based on what you learnt, until you eventually find something that works
# ask on Freenode's #gstreamer channel or the [mailto:gstreamer-devel@lists.freedesktop.org GStreamer mailing list]
# if you think you have found a bug, report it through [https://bugzilla.gnome.org/enter_bug.cgi?product=GStreamer GNOME Bugzilla]
For example, at the time of writing GStreamer 1.0 would fail to create a pipeline unless you specified ''format=UYVY''. Here's the process I went through to find that out (minus some wrong turns and silly mistakes):

<ol>
<li>Made a simple pipeline to encode video in GStreamer 1.0:
<pre>gst-launch-1.0 v4l2src device=$VIDEO_DEVICE ! video/x-raw,interlaced=true,width=720,height=576 ! autovideosink</pre>
* this opened a window, but never played any video frames into it
<li>Tried the same pipeline in GStreamer 0.10:
<pre>gst-launch-0.10 v4l2src device=$VIDEO_DEVICE ! video/x-raw-yuv,interlaced=true,width=720,height=576 ! autovideosink</pre>
* this worked, proving GStreamer was capable of processing video
<li>Reran the failing command with <code>--gst-debug=3</code>:
<pre>gst-launch-1.0 --gst-debug=3 v4l2src device=$VIDEO_DEVICE ! video/x-raw,interlaced=true,width=720,height=576 ! autovideosink</pre>
* this produced some error messages that didn't mean anything to me
<li>Searched on Google for <code>"gst_video_frame_map_id: failed to map video frame plane 1"</code> (with quotes)
* this returned [http://lists.freedesktop.org/archives/gstreamer-devel/2015-February/051678.html a discussion of the problem]
<li>Read through that thread
* most of it went over my head, but I understood it was a GStreamer bug related to the YUV format
<li>Read the <code>v4l2src</code> description for supported formats:
<pre>gst-inspect-1.0 v4l2src | less</pre>
<li>Tried every possible format, and made a note of which ones worked:
<pre>for FORMAT in RGB15 RGB16 BGR RGB BGRx BGRA xRGB ARGB GRAY8 YVU9 YV12 YUY2 UYVY Y42B Y41B NV12_64Z32 YUV9 I420 YVYU NV21 NV12
do
    echo FORMAT: $FORMAT
    gst-launch-1.0 v4l2src device=$VIDEO_DEVICE ! video/x-raw,format=$FORMAT,interlaced=true,width=720,height=576 ! autovideosink
done</pre>
* ''YUY2'' and ''UYVY'' worked, the others all failed in different ways
<li>Searched on Google for information about ''YUY2'' and ''UYVY'', as well as ''YUV9'' and ''YUV12'', which seemed to be related
* eventually found [http://forum.videohelp.com/threads/10031-Which-format-to-use-24-bit-RGB-YUV2-YUV9-YUV12?s=92a0bbf59497ea5c21528e614e7ad1a6&p=38242&viewfull=1#post38242 a post saying they're all compatible]
<li>Compared the output of the formats that worked to the GStreamer 0.10 baseline:
<pre>gst-launch-1.0 v4l2src device=$VIDEO_DEVICE ! video/x-raw, format=YUY2, interlaced=true, width=720, height=576 ! videoconvert ! x264enc ! matroskamux ! filesink location=1-yuy2.mkv
gst-launch-1.0 v4l2src device=$VIDEO_DEVICE ! video/x-raw, format=UYVY, interlaced=true, width=720, height=576 ! videoconvert ! x264enc ! matroskamux ! filesink location=1-uyvy.mkv
gst-launch-0.10 v4l2src device=$VIDEO_DEVICE ! video/x-raw-yuv, interlaced=true, width=720, height=576 ! x264enc ! matroskamux ! filesink location=10.mkv</pre>
* closed each after 5 seconds then played them side-by-side - they looked similar to my eye, but the ''1.0'' files played at ''720x629'' resolution
<li>Re-read the <code>v4l2src</code> description:
<pre>gst-inspect-1.0 v4l2src | less</pre>
* found the <code>pixel-aspect-ratio</code> setting
<li>Arbitrarily chose <code>format=UYVY</code> and tried again with a defined pixel aspect ratio:
<pre>gst-launch-1.0 v4l2src device=$VIDEO_DEVICE pixel-aspect-ratio=1 \
    ! video/x-raw, format=UYVY, interlaced=true, width=720, height=576 \
    ! videoconvert \
    ! x264enc \
    ! matroskamux \
    ! filesink location=1-uyvy.mkv</pre>
* it worked!
</ol>
== Sample pipelines ==

=== High quality video ===

This is an extended version of the [[#Add buffers|Add buffers]] pipeline above: ''do-timestamp=true'' retrieves timestamps from the drivers where possible, and ''videorate''/''audiorate'' use those timestamps to fill any gaps with duplicate frames (the elided stages follow the same encoder/muxer/sink pattern as that example):

 gst-launch-1.0 \
     v4l2src device=$VIDEO_DEVICE do-timestamp=true \
     ! $VIDEO_CAPABILITIES \
     ! videorate \
     ...
     ! queue \
     ! mux. \
     alsasrc device=$AUDIO_DEVICE \
     ! $AUDIO_CAPABILITIES \
     ! audiorate \
     ...

* It seems to be important that the ''video/x-raw-yuv,width=720,height=576,framerate=25/1,interlaced=true,aspect-ratio=4/3'' statement comes after ''videorate'': ''videorate'' otherwise seems to drop the aspect-ratio metadata, resulting in files with an aspect ratio of 1 in their headers. Such files are probably played back warped, and programs like dvdauthor complain about them.
== Ready-made scripts ==

For most use cases, you'll want to wrap GStreamer in a larger shell script that protects against common mistakes during encoding. Although no two use cases are the same, it can be useful to see scripts used by other people - these can fill in blanks and provide inspiration for your own work.

See also [[V4L_capturing/script|the V4L capturing script]] for a wrapper that represents a whole workflow.

=== Bash script to record video tapes with GStreamer (work-in-progress) ===

Note: as of August 2015, this script is still being fine-tuned. Come back in a month or two to see the final version.

This example encapsulates a whole workflow - encoding with GStreamer, transcoding with ffmpeg, and opportunities to edit the audio by hand. The default GStreamer command is similar to [[GStreamer#High_quality_video|the high quality video pipeline above]], and by default ffmpeg converts the result to MPEG4 video and MP3 audio in an AVI container.

The script has been designed so most people should only need to edit the config file, and it even includes a more usable version of the commands from [[GStreamer#Getting_your_device_capabilities|getting your device capabilities]]. In general, you should first run the script with <code>--init</code> to create the config file, then edit that file by hand with help from <code>--caps</code> and <code>--profile</code>, then record with <code>--record</code> and transcode with a generated <code>remaster</code> script.

Search the script for <code>CMD</code> to find the interesting bits. Although the script is quite complex, most of it is just fluff to improve progress information and so on.
<nowiki>#!/bin/bash
#
# Encode a video using either the 0.10 or 1.0 series of GStreamer
# (each has bugs that break encoding on different cards)
#
# Also uses `v4l2-ctl` (from the v4l-utils package) to set the input source,
# and `ffmpeg` to remaster the file
#
# Approximate system requirements for maximum quality settings:
# * about 5-10GB disk space for every hour of the initial recording
# * about 4-8GB disk space for every hour of remastered recordings
# * 1.5GHz processor

HELP_MESSAGE="Usage: $0 --init
       $0 --caps
       $0 --profile
       $0 --record <directory>
       $0 --kill <directory> <timeout>
       $0 --remaster <remaster-script>

Record a video into a directory (one directory per video).

--init      create an initial ~/.v4l-record-scriptrc
            please edit this file before your first recording
--caps      show audio and video capabilities for your device
--profile   update ~/.v4l-record-scriptrc with your system's noise profile
            pause a tape or tune to a silent channel for the best profile
--record    create a faithful recording in the specified directory
--kill      stop the recording in <directory> after <timeout>
            see \`man sleep\` for details about allowed time formats
--remaster  create remastered recordings based on the initial recording
"

CONFIGURATION='#
# CONFIGURATION FOR GSTREAMER RECORD SCRIPT
# For more information, see http://www.linuxtv.org/wiki/index.php/GStreamer
#

#
# VARIABLES YOU NEED TO EDIT
# Every system and every use case is slightly different.
# Here are the things you will probably need to change:
#

# Set these based on your hardware/location:
VIDEO_DEVICE=${VIDEO_DEVICE:-/dev/video0} # `ls /dev/video*` for a list
AUDIO_DEVICE=${AUDIO_DEVICE:-hw:CARD=SAA7134,DEV=0} # `arecord -L` for a list
NORM=${NORM:-PAL} # (search Wikipedia for the exact norm in your country)
VIDEO_INPUT="${VIDEO_INPUT:-1}" # composite input - `v4l2-ctl --device=$VIDEO_DEVICE --list-inputs` for a list

# PAL video is approximately 720x576 resolution.  VHS tapes have about half the
# horizontal quality, but this post convinced me to encode at 720x576 anyway:
# http://forum.videohelp.com/threads/215570-Sensible-resolution-for-VHS-captures?p=1244415#post1244415
# Run `'"$0"' --caps` to find your supported width, height and rate:
SOURCE_WIDTH="${SOURCE_WIDTH:-720}"
SOURCE_HEIGHT="${SOURCE_HEIGHT:-576}"
AUDIO_BITRATE="${AUDIO_BITRATE:-32000}" # audio sample rate in Hz, despite the name

# For systems that do not automatically handle audio/video initialisation times:
AUDIO_DELAY="$AUDIO_DELAY"

#
# VARIABLES YOU MIGHT NEED TO EDIT
# These are defined in the script, but you can override them here if you need non-default values:
#

# set this to 1.0 to use the more recent version of GStreamer:
#GST_VERSION=0.10

# Set these to alter the recording quality:
#GST_X264_OPTS="..."
#GST_FLAC_OPTS="..."

# Set these to control the audio/video pipelines:
#GST_QUEUE="..."
#GST_VIDEO_CAPS="..."
#GST_AUDIO_CAPS="..."
#GST_VIDEO_SRC="..."
#GST_AUDIO_SRC="..."

# ffmpeg has better remastering tools:
#FFMPEG_DENOISE_OPTS="..." # edit depending on your tape quality
#FFMPEG_VIDEO_OPTS="..."
#FFMPEG_AUDIO_OPTS="..."

# Reducing noise:
#GLOBAL_NOISE_AMOUNT=0.21

#
# VARIABLES SET AUTOMATICALLY
#
# Once you have set the above, record a silent source (e.g. a paused tape or silent TV channel)
# then call '"$0"' --profile to build the global noise profile
'

#
# CONFIGURATION SECTION
#

CONFIG_SCRIPT="$HOME/.v4l-record-scriptrc"
[ -e "$CONFIG_SCRIPT" ] && source "$CONFIG_SCRIPT"
source <( echo "$CONFIGURATION" )

GST_VERSION="${GST_VERSION:-0.10}" # or 1.0

# `gst-inspect` has more information here too:
GST_X264_OPTS="interlaced=true pass=quant option-string=qpmin=0:qpmax=0 speed-preset=ultrafast tune=zerolatency byte-stream=true"
GST_FLAC_OPTS=""
GST_MKV_OPTS="min-index-interval=2000000000" # also known as "cue data", this makes seeking faster

# the raw format suffixes changed between series (they aren't required in 1.0):
case "$GST_VERSION" in
    0.10)
        GST_VIDEO_FORMAT="-yuv"
        GST_AUDIO_FORMAT="-int"
        ;;
    1.0)
        GST_VIDEO_FORMAT=""
        GST_AUDIO_FORMAT=""
        ;;
    *)
        echo "Please specify 'GST_VERSION' of '0.10' or '1.0', not '$GST_VERSION'"
        exit 1
        ;;
esac

# `gst-inspect-0.10 <element> | less -i` for a list of properties (e.g. `gst-inspect-0.10 v4l2src | less -i`):
GST_QUEUE="${GST_QUEUE:-queue max-size-buffers=0 max-size-time=0 max-size-bytes=0}"
GST_VIDEO_CAPS="${GST_VIDEO_CAPS:-video/x-raw$GST_VIDEO_FORMAT,interlaced=true,width=$SOURCE_WIDTH,height=$SOURCE_HEIGHT}"
GST_AUDIO_CAPS="${GST_AUDIO_CAPS:-audio/x-raw$GST_AUDIO_FORMAT,depth=16,rate=$AUDIO_BITRATE}"
GST_VIDEO_SRC="${GST_VIDEO_SRC:-v4l2src device=$VIDEO_DEVICE do-timestamp=true norm=$NORM ! $GST_QUEUE ! $GST_VIDEO_CAPS}"
GST_AUDIO_SRC="${GST_AUDIO_SRC:-alsasrc device=$AUDIO_DEVICE do-timestamp=true ! $GST_QUEUE ! $GST_AUDIO_CAPS}"

# `ffmpeg -h full` for more information:
FFMPEG_DENOISE_OPTS="hqdn3d=luma_spatial=6:2:luma_tmp=20" # based on an old VHS tape, with recordings in LP mode
FFMPEG_VIDEO_OPTS="${FFMPEG_VIDEO_OPTS:--flags +ilme+ildct -c:v mpeg4 -q:v 3 -vf il=d,$FFMPEG_DENOISE_OPTS,il=i,crop=(iw-10):(ih-14):3:0,pad=iw:ih:(ow-iw)/2:(oh-ih)/2}"
FFMPEG_AUDIO_OPTS="${FFMPEG_AUDIO_OPTS:--c:a libmp3lame -b:a 256k}" # note: for some reason, ffmpeg desyncs audio and video if "-q:a" is used instead of "-b:a"

#
# UTILITY FUNCTIONS
# You should only need to edit these if you're making significant changes to the way the script works
#

pluralise() {
    case "$1" in
        ""|0) return ;;
        1) echo "$1 $2, " ;;
        *) echo "$1 ${2}s, " ;;
    esac
}

# NOTE: this helper relies on a progress_message function that is not defined
# in this version of the script - the --record branch formats progress inline
gst_progress() {
    START_TIME="$( date +%s )"
    MESSAGE=
    PROGRESS_NEWLINE=
    while read HEAD TAIL
    do
        if [ "$HEAD" = "progressreport0" ]
        then
            NOW_TIME="$( date +%s )"
            echo -n $'\r'"$( echo -n "$MESSAGE" | tr -c '' ' ' )"$'\r'
            MESSAGE="$( echo "$TAIL" | {
                read TIME PROCESSED SLASH TOTAL REPLY
                progress_message "" "$START_TIME" "$TOTAL" "$PROCESSED"
                echo "$MESSAGE"
            })"
            PROGRESS_NEWLINE=$'\n'
        else
            echo "$PROGRESS_NEWLINE$HEAD $TAIL" >&2
            echo "$MESSAGE" >&2
            PROGRESS_NEWLINE=
        fi
    done
    echo -n $'\r'"$( echo -n "$MESSAGE" | tr -c '' ' ' )"$'\r' >&2
}

ffmpeg_progress() {
    MESSAGE="$1..."
    echo -n $'\r'"$MESSAGE" >&2
    while IFS== read PARAMETER VALUE
    do
        if [ "$PARAMETER" = out_time_ms ]
        then
            echo -n $'\r'"$( echo -n "$MESSAGE" | tr -c '' ' ' )"$'\r' >&2
            if [ -z "$TOTAL_TIME_MS" -o "$TOTAL_TIME_MS" = 0 ]
            then
                case $SPINNER in
                    \-|'') SPINNER=\\ ;;
                    \\   ) SPINNER=\| ;;
                    \|   ) SPINNER=\/ ;;
                    \/   ) SPINNER=\- ;;
                esac
                MESSAGE="$1 $SPINNER"
            else
                if [ -n "$VALUE" -a "$VALUE" != 0 ]
                then
                    TIME_REMAINING=$(( ( $(date +%s) - $START_TIME ) * ( $TOTAL_TIME_MS - $VALUE ) / $VALUE ))
                    HOURS_REMAINING=$(( $TIME_REMAINING / 3600 ))
                    MINUTES_REMAINING=$(( ( $TIME_REMAINING - $HOURS_REMAINING*3600 ) / 60 ))
                    SECONDS_REMAINING=$(( $TIME_REMAINING - $HOURS_REMAINING*3600 - $MINUTES_REMAINING*60 ))
                    HOURS_REMAINING="$( pluralise $HOURS_REMAINING hour )"
                    MINUTES_REMAINING="$( pluralise $MINUTES_REMAINING minute )"
                    SECONDS_REMAINING="$( pluralise $SECONDS_REMAINING second )"
                    MESSAGE_REMAINING="$( echo "$HOURS_REMAINING$MINUTES_REMAINING$SECONDS_REMAINING" | sed -e 's/, $//' -e 's/\(.*\),/\1 and/' )"
                    MESSAGE="$1 $(( 100 * VALUE / TOTAL_TIME_MS ))% ETA: $( date +%X -d "$TIME_REMAINING seconds" ) (about $MESSAGE_REMAINING)"
                fi
            fi
            echo -n $'\r'"$MESSAGE" >&2
        elif [ "$PARAMETER" = progress -a "$VALUE" = end ]
        then
            echo -n $'\r'"$( echo -n "$MESSAGE" | tr -c '' ' ' )"$'\r' >&2
            return
        fi
    done
}

# convert 00:00:00.000 to a count in milliseconds
parse_time() {
    echo "$(( $(date -d "1970-01-01T${1}Z" +%s )*1000 + $( echo "$1" | sed -e 's/.*\.\([0-9]\)$/\100/' -e 's/.*\.\([0-9][0-9]\)$/\10/' -e 's/.*\.\([0-9][0-9][0-9]\)$/\1/' -e '/^[0-9][0-9][0-9]$/! s/.*/0/' ) ))"
}

# get the full name of the script's directory
set_directory() {
    if [ -z "$1" ]
    then
        echo "$HELP_MESSAGE"
        exit 1
    else
        DIRECTORY="$( readlink -f "$1" )"
        FILE="$DIRECTORY/$( basename "$DIRECTORY" )"
    fi
}

# actual commands that do something interesting:
CMD_GST="gst-launch-$GST_VERSION"
CMD_FFMPEG="ffmpeg -loglevel 23 -nostdin"
CMD_SOX="nice -n +20 sox"

#
# MAIN LOOP
#

case "$1" in

    -i|--i|--in|--ini|--init)
        if [ -e "$CONFIG_SCRIPT" ]
        then
            echo "Please delete $CONFIG_SCRIPT if you want to recreate it"
        else
            echo "$CONFIGURATION" > "$CONFIG_SCRIPT"
            echo "Please edit $CONFIG_SCRIPT to match your system"
        fi
        ;;

    -p|--p|--pr|--pro|--prof|--profi|--profil|--profile)
        sed -i "$CONFIG_SCRIPT" -e '/^GLOBAL_NOISE_PROFILE=.*/d'
        echo "GLOBAL_NOISE_PROFILE='$( $CMD_GST -q alsasrc device="$AUDIO_DEVICE" ! wavenc ! fdsink | sox -t wav - -n trim 0 1 noiseprof | tr '\n' '\t' )'" >> "$CONFIG_SCRIPT"
        echo "Updated $CONFIG_SCRIPT with global noise profile"
        ;;

    -c|--c|--ca|--cap|--caps)
        {
            echo 'Audio capabilities:' >&2
            "$CMD_GST" --gst-debug=alsa:5 alsasrc device=$AUDIO_DEVICE ! fakesink 2> >( sed -ne '/returning caps\|src caps/ { s/.*\( returning caps \| src caps \)/\t/ ; s/; /\n\t/g ; p }' | sort >&2 ) | head -1 >/dev/null
            sleep 0.1
            echo 'Video capabilities:' >&2
            "$CMD_GST" --gst-debug=v4l2:5,v4l2src:3 v4l2src device=$VIDEO_DEVICE ! fakesink 2> >( sed -ne '/probed caps:\|src caps/ { s/.*\(probed caps:\|src caps\) /\t/ ; s/; /\n\t/g ; p }' | sort >&2 ) | head -1 >/dev/null
        } 2>&1
        ;;

    -r|--rec|--reco|--recor|--record)
        # Build a pipeline with sources being encoded as lossless H.264 video and FLAC audio,
        # then muxed into a Matroska container.
        # Lossless formats are used during encoding to ensure we don't lose much data between passes
        set_directory "$2"
        mkdir -p -- "$DIRECTORY" || exit

        if [ -e "$FILE.pid" ]
        then
            echo "Already recording a video in this directory"
            exit
        fi
        if [ -e "$FILE.mkv" ]
        then
            echo "Please delete the old $FILE.mkv before making a new recording"
            exit 1
        fi

        [ -n "$VIDEO_INPUT" ] && v4l2-ctl --device="$VIDEO_DEVICE" --set-input $VIDEO_INPUT > >( grep -v '^Video input set to' )

        date +"%c: started recording $FILE.mkv"

        # trap keyboard interrupt (control-c) and stop GStreamer gracefully
        trap kill_gstreamer 0 SIGHUP SIGINT SIGQUIT SIGABRT SIGKILL SIGALRM SIGSEGV SIGTERM
        kill_gstreamer() { [ -e "/proc/$(< "$FILE.pid" )" ] && kill -s 2 "$(< "$FILE.pid" )" ; }

        sh -c "echo \$\$ > '$FILE.pid' && \
               exec $CMD_GST -q -e \
                   $GST_VIDEO_SRC ! x264enc $GST_X264_OPTS ! progressreport update-freq=1 ! mux. \
                   $GST_AUDIO_SRC ! flacenc $GST_FLAC_OPTS ! mux. \
                   matroskamux name=mux $GST_MKV_OPTS ! filesink location='$FILE.mkv'" \
            2> >( grep -v 'Source ID [0-9]* was not found when attempting to remove it' ) \
            | \
            while read FROM TIME REMAINDER
            do [ "$FROM" = progressreport0 ] && echo -n $'\r'"$( date +"%c: recorded ${TIME:1:8} - press ctrl+c to finish" )" >&2
            done

        trap '' 0 SIGHUP SIGINT SIGQUIT SIGABRT SIGKILL SIGALRM SIGSEGV SIGTERM
        echo >&2
        date +"%c: finished recording $FILE.mkv"
        rm -f "$FILE.pid"

        cat <<EOF > "$FILE-remaster.sh"
#!$0 --remaster
#
# The original $( basename $FILE ).mkv accurately represents the source.
# If you would like to get rid of imperfections in the source (e.g. by
# splitting it into segments), edit then run this file.
#
# *** REMASTERING OPTIONS ***
#
# AUDIO DELAY
#
# To add a period of silence at the beginning of the video, watch the .mkv
# file and decide how much silence you want.
#
# If you want to add a delay, set this variable to the duration in seconds
# (can be fractional):
#
audio_delay ${AUDIO_DELAY:-0.0}
#
# ORIGINAL FILE
#
# This is the original file to be remastered:
original "$( basename $FILE ).mkv"
#
# SEGMENTS
#
# You can split a video into one or more files.  To create a remastered
# segment, add a line like this:
#
# segment "name of output file.avi" "start time" "end time"
#
# "start time"/"end time" is optional, and specifies the part of the file
# that will be used for the segment
#
# Here are some examples - remove the leading '#' to make one work:

# remaster the whole file in one go:
# segment "$( basename $FILE ).avi"

# split into two parts of about an hour each:
# segment "$( basename $FILE ) part 1.avi" "00:00:00" "01:00:05"
# segment "$( basename $FILE ) part 2.avi" "00:59:55" "02:00:00"
EOF
        chmod 755 "$FILE-remaster.sh"
        cat <<EOF

To remaster this recording, see $FILE-remaster.sh

EOF
        ;;

    -k|--k|--ki|--kil|--kill)
        set_directory "$2"
        if [ -e "$FILE.pid" ]
        then
            if [ -n "$3" ]
            then
                date +"Will \`kill -INT $(< "$FILE.pid" )\` at %X..." -d "+$( echo "$3" | sed -e 's/h/ hour/' -e 's/m/ minute/' -e 's/^\([0-9]*\)s\?$/\1 second/' )" \
                    && sleep "$3" \
                    || exit 0
            fi
            kill -s 2 "$(< "$FILE.pid" )" \
                && date +"Ran \`kill -INT $(< "$FILE.pid" )\` at %X"
        else
            echo "Cannot kill - not recording in $DIRECTORY"
        fi
        ;;

    -m|--rem|--rema|--remas|--remast|--remaste|--remaster)
        # we use ffmpeg and sox here, as they have better remastering tools
        # and GStreamer doesn't offer any particular advantages
        HAVE_REMASTERED=

        # so people that don't understand shell scripts don't have to learn about variables:
        audio_delay() {
            if [[ "$1" =~ ^[0.]*$ ]]
            then AUDIO_DELAY=
            else AUDIO_DELAY="$1"
            fi
        }
        original() { ORIGINAL="$1" ; }

        # build a segment:
        segment() {
            SEGMENT_FILENAME="$1"
            SEGMENT_START="$2"
            SEGMENT_END="$3"
            if [ -e "$SEGMENT_FILENAME" ]
            then
                read -p "Are you sure you want to delete the old $SEGMENT_FILENAME (y/N)? "
                if [ "$REPLY" = "y" ]
                then rm -f "$SEGMENT_FILENAME"
                else return
                fi
            fi

            # Calculate segment:
            if [ -z "$SEGMENT_START" ]
            then
                SEGMENT_START_OPTS=
                SEGMENT_END_OPTS=
            else
                SEGMENT_START_OPTS="-ss $SEGMENT_START"
                SEGMENT_END_OPTS="$(( $( parse_time "$SEGMENT_END" ) - $( parse_time "$SEGMENT_START" ) ))"
                TOTAL_TIME_MS="${SEGMENT_END_OPTS}000" # initial estimate, will calculate more accurately later
                SEGMENT_END_OPTS="-t $( echo "$SEGMENT_END_OPTS" | sed -e s/\\\([0-9][0-9][0-9]\\\)$/.\\\1/ )000"
            fi
            AUDIO_FILE="${SEGMENT_FILENAME/\.*/.wav}"
            CURRENT_STAGE=1
            if [ -e "$AUDIO_FILE" ]
            then STAGE_COUNT=2
            else STAGE_COUNT=3
            fi
            [ -e "$AUDIO_FILE" ] || echo "Edit audio file $AUDIO_FILE and rerun to include hand-crafted audio"

            START_TIME="$( date +%s )"
            while IFS== read PARAMETER VALUE
            do
                if [ "$PARAMETER" = frame ]
                then FRAME=$VALUE
                else
                    [ "$PARAMETER" = out_time_ms ] && OUT_TIME_MS="$VALUE"
                    echo $PARAMETER=$VALUE
                fi
                TOTAL_TIME_MS=$OUT_TIME_MS
                FRAMERATE="${FRAME}000000/$OUT_TIME_MS"
            done < <( $CMD_FFMPEG $SEGMENT_START_OPTS -i "$ORIGINAL" $SEGMENT_END_OPTS -vcodec copy -an -f null /dev/null -progress /dev/stdout < /dev/null ) \
                > >( ffmpeg_progress "$SEGMENT_FILENAME: $CURRENT_STAGE/$STAGE_COUNT calculating framerate" )
            CURRENT_STAGE=$(( CURRENT_STAGE + 1 ))

            # Build audio file for segment:
            MESSAGE=
            if ! [ -e "$AUDIO_FILE" ]
            then
                START_TIME="$( date +%s )"
                # Step one: extract audio
                $CMD_FFMPEG -y -progress >( ffmpeg_progress "$SEGMENT_FILENAME: extracting audio" ) $SEGMENT_START_OPTS -i "$ORIGINAL" $SEGMENT_END_OPTS -vn -f wav >(
                    case "${AUDIO_DELAY:0:1}X" in # Step two: shift the audio according to the audio delay
                        X)
                            # no audio delay
                            cat
                            ;;
                        -)
                            # negative audio delay - trim start
                            $CMD_SOX -V1 -t wav - -t wav - trim 0 "${AUDIO_DELAY:1}"
                            ;;
                        *)
                            # positive audio delay - prepend silence, then the stream from stdin
                            $CMD_SOX -t wav <( $CMD_SOX -n -r "$AUDIO_BITRATE" -c 2 -t wav - trim 0.0 "$AUDIO_DELAY" ) -t wav - -t wav -
                            ;;
                    esac | \
                    if [ -z "$GLOBAL_NOISE_PROFILE" ] # Step three: denoise based on the global noise profile, then normalise audio levels
                    then $CMD_SOX -t wav - "$AUDIO_FILE" norm -1
                    else $CMD_SOX -t wav - "$AUDIO_FILE" noisered <( echo "$GLOBAL_NOISE_PROFILE" | tr '\t' '\n' ) "${GLOBAL_NOISE_AMOUNT:-0.21}" norm -1
                    fi 2> >( grep -vF 'sox WARN wav: Premature EOF on .wav input file' )
                ) < /dev/null
                CURRENT_STAGE=$(( CURRENT_STAGE + 1 ))
            fi
            echo -n $'\r'"$( echo -n "$MESSAGE" | tr -c '' ' ' )"$'\r' >&2

            # Build video file for segment:
            START_TIME="$( date +%s )"
            $CMD_FFMPEG \
                -progress file://>( ffmpeg_progress "$SEGMENT_FILENAME: $CURRENT_STAGE/$STAGE_COUNT creating video" ) \
                $SEGMENT_START_OPTS -i "$ORIGINAL" \
                -i "$AUDIO_FILE" \
                -map 1:0 -map 0:1 \
                -r "$FRAMERATE" \
                $SEGMENT_END_OPTS \
                $FFMPEG_VIDEO_OPTS $FFMPEG_AUDIO_OPTS \
                "$SEGMENT_FILENAME" \
                < /dev/null
            sleep 0.1 # quick-and-dirty way to ensure ffmpeg_progress finishes before we print the next line
            echo "$SEGMENT_FILENAME saved"
            HAVE_REMASTERED=true
        }

        SCRIPT_FILE="$( readlink -f "$2" )"
        cd "$( dirname "$SCRIPT_FILE" )"
        source "$SCRIPT_FILE"
        if [ -z "$HAVE_REMASTERED" ]
        then echo "Please specify at least one segment"
        fi
        ;;

    *)
        echo "$HELP_MESSAGE"
        ;;

esac</nowiki>
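A typical session with the script (saved here under the hypothetical name <code>record-tape.sh</code>) might look like this:

 ./record-tape.sh --init                            # create ~/.v4l-record-scriptrc, then edit it by hand
 ./record-tape.sh --caps                            # list your device's capabilities, to help with editing
 ./record-tape.sh --profile                         # store a noise profile while the source is silent
 ./record-tape.sh --record ~/tapes/holiday-1998     # record until you press ctrl+c...
 ./record-tape.sh --kill ~/tapes/holiday-1998 3h    # ...or, from another terminal, stop after three hours
 ~/tapes/holiday-1998/holiday-1998-remaster.sh      # edit, then run, the generated remaster script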
This script generates a video in two passes: first it makes a faithful master recording, then a generated ''remaster'' script lets you clean up and transcode that master as many times as you like.

=== Bash script to record video tapes with entrans ===

<nowiki>#!/bin/bash
...</nowiki>

* As setting the inputs and controls of the capture device is only partly possible via GStreamer, other tools are used.
* Adjust the settings to match your input sources, the recording volume, capturing saturation and so on.

== Further documentation resources ==
Latest revision as of 14:01, 11 March 2018
GStreamer is a toolkit for building audio- and video-processing pipelines. A pipeline might stream video from a file to a network, or add an echo to a recording, or (most interesting to us) capture the output of a Video4Linux device. Gstreamer is most often used to power graphical applications such as Totem, but can also be used directly from the command-line. This page will explain how GStreamer is better than the alternatives, and how to build an encoder using its command-line interface.
Before reading this page, see V4L capturing to set your system up and create an initial recording. This page assumes you have already implemented the simple pipeline described there.
Introduction to GStreamer
No two use cases for encoding are quite alike. What's your preferred workflow? Is your processor fast enough to encode high quality video in real-time? Do you have enough disk space to store the raw video then process it after the fact? Do you want to play your video in DVD players, or is it enough that it works in your version of VLC? How will you work around your system's obscure quirks?
Use GStreamer if you want the best video quality possible with your hardware, and don't mind spending a weekend browsing the Internet for information.
Avoid GStreamer if you just want something quick-and-dirty, or can't stand programs with bad documentation and unhelpful error messages.
Why is GStreamer better at encoding?
GStreamer isn't as easy to use as mplayer
, and doesn't have as advanced editing functionality as ffmpeg
. But it has superior support for synchronising audio and video in disturbed sources such as VHS tapes. If you specify your input is (say) 25 frames per second video and 48,000Hz audio, most tools will synchronise audio and video simply by writing 1 video frame, 1,920 audio frames, 1 video frame and so on. There are at least three ways this can cause errors:
- initialisation timing: audio and video desynchronised by a certain amount from the first frame, usually caused by audio and video devices taking different amounts of time to initialise. For example, the first audio frame might be delivered to GStreamer 0.01 seconds after it was requested, but the first video frame might not be delivered until 0.7 seconds after it was requested, causing all video to be 0.6 seconds behind the audio
mencoder
's -delay option solves this by delaying the audio
- failure to encode: frames that desynchronise gradually over time, usually caused by audio and video shifting relative to each other when frames are dropped. For example if your CPU is not fast enough and sometimes drops a video frame, after 25 dropped frames the video will be one second ahead of the audio
mencoder
's -harddup option solves this by duplicating other frames to fill in the gaps
- source frame rate: frames that aren't delivered at the advertised rate, usually caused by inaccurate clocks in the source hardware. For example, a low-cost webcam that advertises 25 FPS video and 48kHz audio might actually deliver 25.01 video frames and 47,999 audio frames per second, causing your audio and video to drift apart by a second or so per hour
- video tapes are especially problematic here - if you've ever seen a VCR struggle during those few seconds between two recordings on a tape, you've seen them adjusting the tape speed to accurately track the source. Frame counts can vary enough during these periods to instantly desynchronise audio and video
mencoder
has no solution for this problem
GStreamer solves these problems by attaching a timestamp to each incoming frame based on the time GStreamer receives the frame. It can then mux the sources back together accurately using these timestamps, either by using a format that supports variable framerates or by duplicating frames to fill in the blanks:
- If you choose a container format that supports timestamps (e.g. Matroska), timestamps are automatically written to the file and used to vary the playback speed
- If you choose a container format that does not support timestamps (e.g. AVI), you must duplicate other frames to fill in the gaps by adding the
videorate
andaudiorate
plugins to the end of the relevant pipelines
Getting GStreamer
GStreamer, its most common plugins and tools are available through your distribution's package manager. Most Linux distributions include both the legacy 0.10 and modern 1.0 release series - each has bugs that stop them from working on some hardware, and this page focuses mostly on the modern 1.0 series. Converting between 0.10 and 1.0 is mostly just search-and-replace work (e.g. changing instances of av
to ff
because of the switch from ffmpeg
to libavcodec
). See the porting guide for more.
Other plugins are also available, such as GEntrans
(used in some examples below). Google might help you find packages for your distribution, otherwise you'll need to download and compile them yourself.
Using GStreamer with gst-launch-1.0
gst-launch
is the standard command-line interface to GStreamer. Here's the simplest pipline you can build:
gst-launch-1.0 fakesrc ! fakesink
This connects a single (fake) source to a single (fake) sink using the 1.0 series of GStreamer:
GStreamer can build all kinds of pipelines, but you probably want to build one that looks something like this:
To get a list of elements that can go in a GStreamer pipeline, do:
gst-inspect-1.0 | less
Pass an element name to gst-inspect-1.0
for detailed information. For example:
gst-inspect-1.0 fakesrc gst-inspect-1.0 fakesink
The images above are based on graphs created by GStreamer itself. Install Graphviz to build graphs of your pipelines. For faster viewing of those graphs, you may install xdot from [1]:
mkdir gst-visualisations GST_DEBUG_DUMP_DOT_DIR=gst-visualisations gst-launch-1.0 fakesrc ! fakesink xdot gst-visualisations/*-gst-launch.*_READY.dot
You may also compiles those graph to PNG, SVG or other supported formats:
dot -Tpng gst-visualisations/*-gst-launch.*_READY.dot > my-pipeline.png
To get graphs of the example pipelines below, prepend GST_DEBUG_DUMP_DOT_DIR=gst-visualisations
to the gst-launch-1.0
command. Run this command to generate a graph of GStreamer's most interesting stage:
xdot gst-visualisations/*-gst-launch.PLAYING_READY.dot
Remember to empty the gst-visualisations
directory between runs.
Using GStreamer with entrans
gst-launch-1.0
is the main command-line interface to GStreamer, available by default. But entrans
is a bit smarter:
- it provides partly-automated composition of GStreamer pipelines
- it allows you to cut streams, for example to capture for a predefined duration. That ensures headers are written correctly, which is not always the case if you close
gst-launch-1.0
by pressing Ctrl+C. To use this feature one has to insert a dam element after the first queue of each part of the pipeline
Building pipelines
You will probably need to build your own GStreamer pipeline for your particular use case. This section contains examples to give you the basic idea.
Note: for consistency and ease of copy/pasting, all filenames in this section are of the form test-$( date --iso-8601=seconds )
- your shell should automatically convert this to e.g. test-2010-11-12T13:14:15+1600.avi
Record raw video only
A simple pipeline that initialises one video source, sets the video format, muxes it into a file format, then saves it to a file:
gst-launch-1.0 \ v4l2src device=$VIDEO_DEVICE \ ! $VIDEO_CAPABILITIES \ ! avimux \ ! filesink location=test-$( date --iso-8601=seconds ).avi
This will create an AVI file with raw video and no audio. It should play in most software, but the file will be huge.
Record raw audio only
A simple pipeline that initialises one audio source, sets the audio format, muxes it into a file format, then saves it to a file:
gst-launch-1.0 \ alsasrc device=$AUDIO_DEVICE \ ! $AUDIO_CAPABILITIES \ ! avimux \ ! filesink location=test-$( date --iso-8601=seconds ).avi
This will create an AVI file with raw audio and no video.
Record video and audio
gst-launch-1.0 \ v4l2src device=$VIDEO_DEVICE \ ! $VIDEO_CAPABILITIES \ ! mux. \ alsasrc device=$AUDIO_DEVICE \ ! $AUDIO_CAPABILITIES \ ! mux. \ avimux name=mux \ ! filesink location=test-$( date --iso-8601=seconds ).avi
Instead of a straightforward pipe with a single source leading into a muxer, this pipe has three parts:
- a video source leading to a named element (
! name.
with a full stop means "pipe to the name element") - an audio source leading to the same element
- a named muxer element leading to a file sink
Muxers combine data from many inputs into a single output, allowing you to build quite flexible pipes.
Create multiple sinks
The tee
element splits a single source into multiple outputs:
gst-launch-1.0 \ v4l2src device=$VIDEO_DEVICE \ ! $VIDEO_CAPABILITIES \ ! avimux \ ! tee name=network \ ! filesink location=test-$( date --iso-8601=seconds ).avi \ tcpclientsink host=127.0.0.1 port=5678
This sends your stream to a file (filesink
) and out over the network (tcpclientsink
). To make this work, you'll need another program listening on the specified port (e.g. nc -l 127.0.0.1 -p 5678
).
Encode audio and video
As well as piping streams around, GStreamer can manipulate their contents. The most common manipulation is to encode a stream:
gst-launch-1.0 \ v4l2src device=$VIDEO_DEVICE \ ! $VIDEO_CAPABILITIES \ ! videoconvert \ ! theoraenc \ ! queue \ ! mux. \ alsasrc device=$AUDIO_DEVICE \ ! $AUDIO_CAPABILITIES \ ! audioconvert \ ! vorbisenc \ ! mux. \ oggmux name=mux \ ! filesink location=test-$( date --iso-8601=seconds ).ogg
The theoraenc
and vorbisenc
elements encode the video and audio using Ogg Theora and Ogg Vorbis encoders. The pipes are then muxed together into an Ogg container before being saved.
Add buffers
Different elements work at different speeds. For example, a CPU-intensive encoder might fall behind when another process uses too much processor time, or a duplicate frame detector might hold frames back while it examines them. This can cause streams to fall out of sync, or frames to be dropped altogether. You can add queues to smooth these problems out:
gst-launch-1.0 -q -e \ v4l2src device=$VIDEO_DEVICE \ ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \ ! $VIDEO_CAPABILITIES \ ! videoconvert \ ! x264enc interlaced=true pass=quant quantizer=0 speed-preset=ultrafast byte-stream=true \ ! progressreport update-freq=1 \ ! mux. \ alsasrc device=$AUDIO_DEVICE \ ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \ ! $AUDIO_CAPABILITIES \ ! audioconvert \ ! flacenc \ ! mux. \ matroskamux name=mux min-index-interval=1000000000 \ ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \ ! filesink location=test-$( date --iso-8601=seconds ).mkv
This creates a file using FLAC audio and x264 video in lossless mode, muxed into a Matroska container. Because we used speed-preset=ultrafast, the buffers should simply smooth out the flow of frames through the pipeline; with speed-preset=veryslow, even buffers set to the maximum possible size would eventually fill up and frames would be dropped.
Some other things to note about this pipeline:
- FFmpeg's H.264 page includes a useful discussion of speed presets (both programs use the same underlying library)
- quantizer=0 sets the video codec to lossless mode (~30GB/hour). Anything up to quantizer=18 should not lose information visible to the human eye, and will produce much smaller files (~10GB/hour) - see the sketch after this list
- min-index-interval=1000000000 improves seek times by telling the Matroska muxer to create one cue data entry per second of playback. Cue data is a few kilobytes per hour, added to the end of the file when encoding completes. If you try to watch your Matroska video while it's being recorded, it will take a long time to skip forward/back because the cue data hasn't been written yet
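For example, for a visually-lossless recording at roughly a third of the size, the only line that needs to change in the pipeline above is the encoder's (everything else stays the same):
! x264enc interlaced=true pass=quant quantizer=18 speed-preset=ultrafast byte-stream=true \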
Common capturing issues and their solutions
Reducing Jerkiness
If motion that should appear smooth instead stops and starts, try the following:
Check for muxer issues. Some muxers need big chunks of data, which can cause one stream to pause while it waits for the other to fill up. Change your pipeline to pipe your audio and video directly to their own filesinks - if the separate files don't judder, the muxer is the problem (see the diagnostic sketch below).
- If the muxer is at fault, add ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 immediately before each stream goes to the muxer
- queues have hard-coded maximum sizes - you can chain queues together if you need more buffering than one queue can hold
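A minimal sketch of the separate-files test described above (the filenames are placeholders; wavenc is used for the audio so the two streams stay completely independent):
gst-launch-1.0 \
    v4l2src device=$VIDEO_DEVICE \
    ! $VIDEO_CAPABILITIES \
    ! avimux \
    ! filesink location=video-only.avi \
    alsasrc device=$AUDIO_DEVICE \
    ! $AUDIO_CAPABILITIES \
    ! wavenc \
    ! filesink location=audio-only.wav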
Check your CPU load. When GStreamer uses 100% CPU, it may need to drop frames to keep up.
- If frames are dropped occasionally when CPU usage spikes to 100%, add a (larger) buffer to help smooth things out. This can be a source's internal buffer (e.g. alsasrc buffer-time=2000000) or an extra buffering step in your pipeline (! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0)
- If frames are dropped when other processes have high CPU load, consider using nice to make sure encoding gets CPU priority
- If frames are dropped regularly, use a different codec, change the parameters, lower the resolution, or otherwise choose a less resource-intensive solution
As a general rule, you should try increasing buffers first - if it doesn't work, it will just increase the pipeline's latency a bit. Be careful with nice, as it can slow down or even halt your computer.
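For example, a sketch of giving a capture pipeline higher priority - note that negative nice values normally require root, and -10 is an arbitrary choice:
sudo nice -n -10 gst-launch-1.0 \
    v4l2src device=$VIDEO_DEVICE \
    ! $VIDEO_CAPABILITIES \
    ! avimux \
    ! filesink location=test-$( date --iso-8601=seconds ).avi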
Check for incorrect timestamps. If your video driver works by filling up an internal buffer then passing a cluster of frames without timestamps, GStreamer will think these should all have (nearly) the same timestamp. Make sure you have a videorate element in your pipeline, then add silent=false to it. If it reports many framedrops and framecopies even when the CPU load is low, the driver is probably at fault.
- videorate on its own will actually make this problem worse by picking one frame and replacing all the others with it. Instead, install entrans and add its stamp element between v4l2src and queue (e.g. v4l2src do-timestamp=true ! stamp sync-margin=2 sync-interval=5 ! videorate ! queue)
- stamp intelligently guesses timestamps if drivers don't support timestamping. Its sync- options drop or copy frames to get a nearly-constant framerate. Using videorate as well does no harm and can solve some remaining problems
Avoiding pitfalls with video noise
If your video contains periods of video noise (snow), you may need to deal with some extra issues:
- Most devices send an EndOfStream signal if the input signal quality drops too low, causing GStreamer to finish capturing. To prevent the device from sending EOS, set num-buffers=-1 on the v4l2src element (see the sketch after this list).
- The stamp plugin gets confused by periods of snow, causing it to generate faulty timestamps and drop frames. stamp recovers normal behaviour when the noise ends, but will probably leave the buffer full of weirdly-stamped frames. As stamp only drops one weirdly-stamped frame per sync-interval, it can take several minutes until everything works normally again. To solve this problem, set leaky=2 on each queue element to allow dropping old frames
- Periods of noise (snow, bad signal etc.) are hard to encode. Variable bitrate encoders will often drive up the bitrate during the noise then down afterwards to maintain the average bitrate. To minimise the issues, specify a minimum and maximum bitrate in your encoder
- Snow at the start of a recording is just plain ugly. To get black input instead from a VCR, use the remote control to change the input source before you start recording
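Putting the first two points together, a minimal sketch of a noise-tolerant capture - the MJPEG encoding is only an example, and stamp is omitted here since it is an entrans element:
gst-launch-1.0 \
    v4l2src device=$VIDEO_DEVICE do-timestamp=true num-buffers=-1 \
    ! queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! $VIDEO_CAPABILITIES \
    ! videoconvert \
    ! jpegenc \
    ! avimux \
    ! filesink location=test-$( date --iso-8601=seconds ).avi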
Investigating bugs in GStreamer
GStreamer comes with an extensive tracing system that lets you track down problems, though you often need to understand GStreamer's internals to interpret the traces. Read this documentation page for the basics of how the tracing system works. When something goes wrong, you should:
- try to find a useful error message by enabling the ERROR debug level: GST_DEBUG=2 gst-launch-1.0 (see the sketch after this list)
- try similar pipelines - reduce yours to its most minimal form, then add elements back until you can reproduce the issue
- as you are most likely having issues with the V4L2 element, enable full v4l2 traces using GST_DEBUG="v4l2*:7,2" gst-launch-1.0
- find an error message that looks relevant, and search the Internet for information about it
- try more variations based on what you learn, until you eventually find something that works
- ask on Freenode #gstreamer or through the GStreamer mailing list
- if you think you have found a bug, report it through Gnome Bugzilla
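For example, a sketch that writes an ERROR-level trace to a file while discarding the captured data (GST_DEBUG_FILE is a standard GStreamer environment variable; the log filename is a placeholder):
GST_DEBUG=2 GST_DEBUG_FILE=gst-trace.log gst-launch-1.0 \
    v4l2src device=$VIDEO_DEVICE \
    ! $VIDEO_CAPABILITIES \
    ! fakesink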
Sample pipelines
record from a bad analog signal to MJPEG video and RAW mono audio
gst-launch-1.0 \
    v4l2src device=$VIDEO_DEVICE do-timestamp=true \
    ! $VIDEO_CAPABILITIES \
    ! videorate \
    ! $VIDEO_CAPABILITIES \
    ! videoconvert \
    ! $VIDEO_CAPABILITIES \
    ! jpegenc \
    ! queue \
    ! mux. \
    alsasrc device=$AUDIO_DEVICE \
    ! $AUDIO_CAPABILITIES \
    ! audiorate \
    ! audioresample \
    ! $AUDIO_CAPABILITIES, rate=44100 \
    ! audioconvert \
    ! $AUDIO_CAPABILITIES, rate=44100, channels=1 \
    ! queue \
    ! mux. \
    avimux name=mux \
    ! filesink location=test-$( date --iso-8601=seconds ).avi
The chip that captures audio and video might not deliver the exact framerates specified, which the AVI format can't handle. The audiorate and videorate elements remove or duplicate frames to maintain a constant rate.
View pictures from a webcam (GStreamer 0.10)
gst-launch-0.10 \
    v4l2src do-timestamp=true device=$VIDEO_DEVICE \
    ! video/x-raw-yuv,format=\(fourcc\)UYVY,width=320,height=240 \
    ! ffmpegcolorspace \
    ! autovideosink
In GStreamer 0.10, videoconvert was called ffmpegcolorspace.
Entrans: Record to DVD-compliant MPEG2 (GStreamer 0.10)
entrans -s cut-time -c 0-180 -v -x '.*caps' --dam -- --raw \
    v4l2src queue-size=16 do-timestamp=true device=$VIDEO_DEVICE norm=PAL-BG num-buffers=-1 \
    ! stamp silent=false progress=0 sync-margin=2 sync-interval=5 \
    ! queue silent=false leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! dam \
    ! cogcolorspace \
    ! videorate silent=false \
    ! 'video/x-raw-yuv,width=720,height=576,framerate=25/1,interlaced=true,aspect-ratio=4/3' \
    ! queue silent=false leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! ffenc_mpeg2video rc-buffer-size=1500000 rc-max-rate=7000000 rc-min-rate=3500000 bitrate=4000000 max-key-interval=15 pass=pass1 \
    ! queue silent=false leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! mux. \
    pulsesrc buffer-time=2000000 do-timestamp=true \
    ! queue silent=false leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! dam \
    ! audioconvert \
    ! audiorate silent=false \
    ! audio/x-raw-int,rate=48000,channels=2,depth=16 \
    ! queue silent=false max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! ffenc_mp2 bitrate=192000 \
    ! queue silent=false leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! mux. \
    ffmux_mpeg name=mux \
    ! filesink location=test-$( date --iso-8601=seconds ).mpg
This captures 3 minutes (180 seconds, see first line of the command) to test-$( date --iso-8601=seconds ).mpg and even works for bad input signals.
- I wasn't able to figure out how to produce an MPEG with AC-3 sound, as neither ffmux_mpeg nor mpegpsmux supports AC-3 streams at the moment. mplex does, but I wasn't able to get it working: it needs very big buffers to prevent the pipeline from stalling, and at least my GStreamer build didn't allow for such big buffers.
- The limited buffer size on my system is also why I had to add a third queue element in the middle of both the audio and the video parts of the pipeline to prevent jerkiness.
- Many HOWTOs use ffmpegcolorspace instead of cogcolorspace. That works too, but cogcolorspace is much faster.
- It seems to be important that the video/x-raw-yuv,width=720,height=576,framerate=25/1,interlaced=true,aspect-ratio=4/3 statement comes after videorate: otherwise videorate seems to drop the aspect-ratio metadata, resulting in files with an aspect ratio of 1 in their headers. Such files are played back warped, and programs like dvdauthor complain.
Bash script to record video tapes with entrans
For most use cases, you'll want to wrap GStreamer in a larger shell script. This script protects against several common mistakes during encoding.
See also the V4L capturing script for a wrapper that represents a whole workflow.
#!/bin/bash

targetdirectory="$HOME/videos"

# check whether another instance is already running
if [[ -e "$HOME/.lock_shutdown.digitalisieren" ]]; then
    echo ""
    echo ""
    echo "Capturing already running. It is impossible to capture two tapes simultaneously. Hit a key to abort."
    read -n 1
    exit
fi

# trap keyboard interrupt (control-c)
trap control_c 0 SIGHUP SIGINT SIGQUIT SIGABRT SIGALRM SIGSEGV SIGTERM

control_c()  # run if user hits control-c
{
    cleanup
    exit $?
}

cleanup()
{
    rm "$HOME/.lock_shutdown.digitalisieren"
    return $?
}

touch "$HOME/.lock_shutdown.digitalisieren"

echo ""
echo ""
echo "Please enter the length of the tape in minutes and press ENTER. (Press Ctrl+C to abort.)"
echo ""
while read -e laenge; do
    if [[ $laenge == [0-9]* ]]; then
        break
    else
        echo ""
        echo ""
        echo "That's not a number."
        echo "Please enter the length of the tape in minutes and press ENTER. (Press Ctrl+C to abort.)"
        echo ""
    fi
done
let laenge=laenge+10    # safety margin in case the tape is longer than stated
let laenge=laenge*60

echo ""
echo ""
echo "Please type in the description of the tape."
echo "Don't forget to rewind the tape!"
echo "Hit ENTER to start capturing. Press Ctrl+C to abort."
echo ""
read -e name;
name=${name//\//_}
name=${name//\"/_}
name=${name//:/_}

# if the name is already taken, append a number
if [[ -e "$targetdirectory/$name.mpg" ]]; then
    nummer=0
    while [[ -e "$targetdirectory/$name.$nummer.mpg" ]]; do
        let nummer=nummer+1
    done
    name=$name.$nummer
fi

# audio settings: unmute and set levels
amixer -D pulse cset name='Capture Switch' 1 >& /dev/null      # enable the capture channel
amixer -D pulse cset name='Capture Volume' 20724 >& /dev/null  # set the capture level

# select the video input and configure the card
v4l2-ctl --set-input 3 >& /dev/null
v4l2-ctl -c saturation=80 >& /dev/null
v4l2-ctl -c brightness=130 >& /dev/null

let ende=$(date +%s)+laenge

echo ""
echo "Working"
echo "Capturing will be finished at "$(date -d @$ende +%H.%M)"."
echo ""
echo "Press Ctrl+C to finish capturing now."

nice -n -10 entrans -s cut-time -c 0-$laenge -m --dam -- --raw \
    v4l2src queue-size=16 do-timestamp=true device=$VIDEO_DEVICE norm=PAL-BG num-buffers=-1 \
    ! stamp sync-margin=2 sync-interval=5 silent=false progress=0 \
    ! queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! dam \
    ! cogcolorspace ! videorate \
    ! 'video/x-raw-yuv,width=720,height=576,framerate=25/1,interlaced=true,aspect-ratio=4/3' \
    ! queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! ffenc_mpeg2video rc-buffer-size=1500000 rc-max-rate=7000000 rc-min-rate=3500000 bitrate=4000000 max-key-interval=15 pass=pass1 \
    ! queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! mux. \
    pulsesrc buffer-time=2000000 do-timestamp=true \
    ! queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! dam \
    ! audioconvert ! audiorate \
    ! audio/x-raw-int,rate=48000,channels=2,depth=16 \
    ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! ffenc_mp2 bitrate=192000 \
    ! queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! mux. \
    ffmux_mpeg name=mux ! filesink location="$targetdirectory/$name.mpg" >& /dev/null

echo "Finished Capturing"
rm "$HOME/.lock_shutdown.digitalisieren"
The script uses a command line similar to the one above to produce a DVD-compliant MPEG-2 file.
- The script aborts if another instance is already running.
- If not, it asks for the length of the tape and a description.
- It records to description.mpg (or, if that file already exists, to description.0.mpg and so on) for the given time plus 10 minutes. The target directory has to be specified at the beginning of the script.
- As configuring the inputs and settings of the capture device is only partly possible via GStreamer, other tools (amixer, v4l2-ctl) are used.
- Adjust the settings to match your input source, recording volume, capture saturation and so on.
Further documentation resources
- V4L Capturing
- GStreamer project
- FAQ
- Documentation
- man gst-launch
- entrans command line tool documentation
- gst-inspect plugin-name