GStreamer

GStreamer is a toolkit for building audio- and video-processing pipelines. A pipeline might stream video from a file to a network, or add an echo to a recording, or (most interesting to us) capture the output of a Video4Linux device. GStreamer is most often used to power graphical applications such as [https://wiki.gnome.org/Apps/Videos Totem], but this page will explain how to build an encoder using its command-line interface.

== Getting Started with GStreamer ==

GStreamer, its most common plugins, and tools like ''gst-launch'' are available through your distribution's package manager. But <code>entrans</code> and some of the plugins used in the examples below are not. You can find their sources bundled by the [http://sourceforge.net/projects/gentrans/files/gst-entrans/ GEntrans project] at sourceforge. Google may help you to find precompiled packages for your distro.

Two series of GStreamer are available - ''0.10'' and ''1.0''. Most Linux distributions include both, but this page discusses the older ''0.10'' series because I was unable to get the ''1.0'' series to work with my TV card. Converting the commands below to work with ''1.0'' is mostly just search-and-replace work (e.g. changing instances of <code>ff</code> to <code>av</code> because of the switch from <code>ffmpeg</code> to <code>libavcodec</code>). See [http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/chapter-porting-1.0.html the porting guide] for more.

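For example, here is an illustrative sketch of one such rename (<code>ffenc_mp2</code> became <code>avenc_mp2</code>; <code>audiotestsrc</code> just generates a test tone):

 # GStreamer 0.10: ffmpeg-based encoders are prefixed "ffenc_"
 gst-launch-0.10 audiotestsrc num-buffers=100 ! audioconvert ! ffenc_mp2 ! filesink location=test.mp2
 
 # the same pipeline in GStreamer 1.0: the encoders are now prefixed "avenc_"
 gst-launch-1.0 audiotestsrc num-buffers=100 ! audioconvert ! avenc_mp2 ! filesink location=test.mp2
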
=== Using GStreamer with gst-launch ===

<code>gst-launch</code> is the standard command-line interface to GStreamer. Here's the simplest pipeline you can build:

 gst-launch-0.10 fakesrc ! fakesink

This connects a single (fake) source to a single (fake) sink using the 0.10 series of GStreamer:

[[File:GStreamer-simple-pipeline.png]]

To learn more about the source and sink elements, do:

 gst-inspect-0.10 fakesrc
 gst-inspect-0.10 fakesink

If you have installed [http://www.graphviz.org Graphviz], you can build a graph like the above yourself:

 mkdir gst-visualisations
 GST_DEBUG_DUMP_DOT_DIR=gst-visualisations gst-launch-0.10 fakesrc ! fakesink
 dot -Tpng gst-visualisations/*-gst-launch.PLAYING_READY.dot > my-pipeline.png

To get graphs of the example pipelines below, prepend <code>GST_DEBUG_DUMP_DOT_DIR=gst-visualisations </code> to the <code>gst-launch</code> command. Run this command to generate a PNG version of GStreamer's most interesting stage:

 dot -Tpng gst-visualisations/*-gst-launch.PLAYING_READY.dot > my-pipeline.png

Remember to empty the <code>gst-visualisations</code> directory between runs.

=== Using GStreamer with entrans ===

<code>gst-launch</code> is the main command-line interface to GStreamer, available by default. But <code>entrans</code> is a bit smarter:

* it provides partly-automated composition of GStreamer pipelines
* it allows cutting of streams, e.g. to capture for a predefined duration. That ensures headers are written correctly, which is not always the case if you stop <code>gst-launch</code> by pressing Ctrl+C. To use this feature, insert a ''dam'' element after the first ''queue'' of each part of the pipeline, as in the sketch below

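A minimal sketch of a cut-time capture (60 seconds to Ogg Theora; the device variable is a placeholder, and the complete examples later on this page add ''stamp'' and larger queues):

 entrans -s cut-time -c 0-60 --dam -- --raw \
     v4l2src do-timestamp=true device=$VIDEO_DEVICE ! queue ! dam ! \
     ffmpegcolorspace ! theoraenc ! queue ! mux. \
     oggmux name=mux ! filesink location=test.ogg
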
== Using GStreamer for V4L TV capture ==

No two use cases for encoding are quite alike. Is your processor fast enough to encode high quality video? Do you want to play your video in DVD players, or is it enough that it works in your version of [http://www.videolan.org/vlc/index.en_GB.html VLC]? Which obscure quirks does your system have?

'''Use GStreamer if''' you want the best video quality possible with your hardware, and don't mind spending a weekend browsing the Internet for information.

'''Avoid GStreamer if''' you just want something quick-and-dirty, or can't stand poorly documented programs.

=== Why prefer GStreamer? ===

GStreamer is better than most tools at synchronising audio with video from disturbed sources such as VHS tapes. If you specify your input is (say) 25 frames per second video and 48,000Hz audio, most tools will synchronise audio and video simply by writing 1 video frame, 1,920 audio samples, 1 video frame and so on. This calculation can lead to errors for some sources:

* if the audio and video devices take different amounts of time to initialise. For example, the first audio frame might be delivered to GStreamer 0.01 seconds after it was requested, but the first video frame might not be delivered until 0.7 seconds after it was requested, causing all video to be 0.6 seconds behind the audio
** <code>mencoder</code>'s ''-delay'' option solves this by delaying the audio
* if frames are dropped, audio and video shift relative to each other. For example if your CPU is not fast enough and sometimes drops a video frame, after 25 dropped video frames the video will be one second ahead of the audio
** <code>mencoder</code>'s ''-harddup'' option solves this by duplicating other frames to fill in the gaps
* if your hardware has a slightly inaccurate clock (common in low-cost home-user products). For example, your webcam might deliver 25.01 video frames per second and your audio source might deliver 47,999Hz, causing your audio and video to drift apart by a second or so per hour
** video tapes are especially problematic here - if you've ever seen a VCR struggle with a low quality recording (e.g. the few seconds between two recordings on a tape), you've seen it adjusting the tape speed to accurately track the source. Frame counts can vary enough during these periods to instantly desynchronise audio and video
** <code>mencoder</code> has no solution for this problem

GStreamer solves these problems by attaching a timestamp to each incoming frame based on the time GStreamer receives the frame. It can then mux the sources back together accurately using these timestamps, either by using a format that supports variable framerates or by duplicating frames to fill in the blanks:

# If you choose a container format that supports timestamps (e.g. Matroska), timestamps are automatically written to the file and used to vary the playback speed
# If you choose a container format that does not support timestamps (e.g. AVI), you must duplicate other frames to fill in the gaps by adding the <code>videorate</code> and <code>audiorate</code> plugins to the end of the relevant pipelines

To get accurate timestamps, specify the <code>do-timestamp=true</code> option for all your sources. This will ensure accurate timestamps are retrieved from the driver where possible. Sadly, many v4l2 drivers don't support timestamps - GStreamer will add timestamps for these drivers to stop audio and video drifting apart, but you will need to fix the constant time-offset yourself (discussed below).

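A minimal sketch of this pattern for an AVI recording (device variables are placeholders; the complete examples below add colourspace conversion and encoders, which you will usually want):

 gst-launch-0.10 \
     v4l2src do-timestamp=true device=$VIDEO_DEVICE \
     ! video/x-raw-yuv,width=640,height=480 \
     ! videorate \
     ! mux. \
     alsasrc do-timestamp=true device=$AUDIO_DEVICE \
     ! audio/x-raw-int,channels=2,rate=32000,depth=16 \
     ! audiorate \
     ! mux. \
     avimux name=mux \
     ! filesink location=test-sync.avi
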
== Common capturing issues and their solutions ==

=== Determining your video source ===

See all your video sources by doing:

 ls /dev/video*

One of these is the card you want. Most people only have one, or can figure it out by disconnecting devices and rerunning the above command. Otherwise, check the capabilities of each device:

 for VIDEO_DEVICE in /dev/video* ; do echo ; echo ; echo $VIDEO_DEVICE ; echo ; v4l2-ctl --device=$VIDEO_DEVICE --list-inputs ; done

Usually you will see e.g. a webcam with a single input and a TV card with multiple inputs. If you're still not sure which one is yours, try each one in turn:

 v4l2-ctl --device=<device> --set-input=<whichever-input-you-want-to-use>
 gst-launch-0.10 v4l2src do-timestamp=true device=<device> ! autovideosink

(if your source is a VCR, remember to play a video so you know the right one when you see it)

If you like, you can store your device in an environment variable:

 VIDEO_DEVICE=<device>

All further examples will use <code>$VIDEO_DEVICE</code> in place of an actual video device.

=== Determining your audio source ===

See all your audio sources by doing:

 arecord -l

Again, it should be fairly obvious which of these is the right one. Get the device names by doing:

 arecord -L | grep ^hw:

If you're not sure which one you want, try each in turn:

 gst-launch-0.10 alsasrc do-timestamp=true device=hw:<device> ! autoaudiosink

Again, you should hear your tape playing when you get the right one. Note: always use an ALSA ''hw'' device, as they are closest to the hardware. PulseAudio devices and ALSA's ''plughw'' devices add extra layers that, while more convenient for most uses, only cause headaches for us.

Optionally set your device in an environment variable:

 AUDIO_DEVICE=<device>

All further examples will use <code>$AUDIO_DEVICE</code> in place of an actual audio device.

=== Reducing Jerkiness ===

If motion that should appear smooth instead stops and starts, try the following:

'''Check for muxer issues'''. Some muxers need big chunks of data, which can cause one stream to pause while it waits for the other to fill up. Change your pipeline to pipe your audio and video directly to their own <code>filesink</code>s - if the separate files don't judder, the muxer is the problem.

* If the muxer is at fault, add ''! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0'' immediately before each stream goes to the muxer
** queues have hard-coded maximum sizes - you can chain queues together if you need more buffering than one queue can hold

'''Check your CPU load'''. When GStreamer uses 100% CPU, it may need to drop frames to keep up.

* If frames are dropped occasionally when CPU usage spikes to 100%, add a (larger) buffer to help smooth things out.
** this can be a source's internal buffer (e.g. ''v4l2src queue-size=16'' or ''alsasrc buffer-time=2000000''), or it can be an extra buffering step in your pipeline (''! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0'')
* If frames are dropped when other processes have high CPU load, consider using [https://en.wikipedia.org/wiki/Nice_(Unix) nice] to make sure encoding gets CPU priority
* If frames are dropped regularly, use a different codec, change the parameters, lower the resolution, or otherwise choose a less resource-intensive solution

As a general rule, you should try increasing buffers first - if it doesn't work, it will just increase the pipeline's latency a bit. Be careful with <code>nice</code>, as it can slow down or even halt your computer.

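For example, one way to give an encoding pipeline higher priority (negative niceness values require root; the pipeline itself is a placeholder based on the raw-video example below):

 sudo nice -n -10 gst-launch-0.10 v4l2src do-timestamp=true device=$VIDEO_DEVICE ! video/x-raw-yuv,width=640,height=480 ! avimux ! filesink location=test0.avi
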
'''Check for incorrect timestamps'''. If your video driver works by filling up an internal buffer then passing a cluster of frames without timestamps, GStreamer will think these should all have (nearly) the same timestamp. Make sure you have a <code>videorate</code> element in your pipeline, then add ''silent=false'' to it. If it reports many framedrops and framecopies even when the CPU load is low, the driver is probably at fault.

* <code>videorate</code> on its own will actually make this problem worse by picking one frame and replacing all the others with it. Instead install <code>entrans</code> and add its ''stamp'' element between ''v4l2src'' and ''queue'' (e.g. ''v4l2src do-timestamp=true ! stamp sync-margin=2 sync-interval=5 ! videorate ! queue'')
** ''stamp'' intelligently guesses timestamps if drivers don't support timestamping. Its ''sync-'' options drop or copy frames to get a nearly-constant framerate. Using <code>videorate</code> as well does no harm and can solve some remaining problems.

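Putting these fixes together, the video part of a jerk-resistant pipeline might look like this fragment (encoder and muxer omitted; the parameter values are the ones suggested above):

 v4l2src do-timestamp=true queue-size=16 device=$VIDEO_DEVICE \
     ! stamp sync-margin=2 sync-interval=5 \
     ! videorate \
     ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
     ! ...
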
=== Measuring your video framerate ===

As mentioned above, some video cards produce slightly too many (or too few) frames per second. To check your system's actual frames per second, start your video source (e.g. a VCR or webcam) then run this command:

 gst-launch-0.10 v4l2src ! fpsdisplaysink fps-update-interval=100000

# Let it run for 100 seconds to get a large enough sample. It should print some statistics in the bottom of the window - write down the number of frames dropped
# Let it run for another 100 seconds, then write down the new number of frames dropped
# Calculate <code>(second number) - (first number) - 1</code> (e.g. 5007 - 2504 - 1 == 2502)
#* You need to subtract one because <code>fpsdisplaysink</code> drops one frame every time it displays the counter
# That number is exactly one hundred times your framerate, so you should tell GStreamer e.g. <code>framerate=2502/100</code>

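The measured rate then goes into your video caps, as in this fragment (the width and height are taken from the examples below):

 ... ! videorate ! video/x-raw-yuv,width=720,height=576,framerate=2502/100 ! ...
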
=== Fixing a constant time-offset ===

If your hardware doesn't support timestamps, your encoded file might have a constant desynchronisation between the audio and the video. This offset is based on too many factors to isolate (e.g. a new driver version might increase or decrease the value), so fixing this is a manual process that probably needs to be done every time you encode a file.

'''Calculate your desired offset:'''

# Record a video using one of the techniques below
# Open the video in your favourite video player
# Adjust the A/V sync until it looks right to you - different players put this in different places, for example it's ''Tools > Track Synchronisation'' in VLC
# Write down your desired time-offset

If possible, look for (or create) [https://en.wikipedia.org/wiki/Clapperboard clapperboard]-like events - moments where an obvious visual element occurred at the same moment as an obvious audio moment. A hand clapping or a cup being placed on a table are good examples.

Extract your audio:

 gst-launch-0.10 \
     uridecodebin uri="file:///path/to/my.file" \
     ! progressreport \
     ! audioconvert \
     ! audiorate \
     ! wavenc \
     ! filesink location="/path/to/my.file.wav"

If you have a clapperboard event, you might want to examine the extracted file in an audio editor like [http://audacityteam.org/ Audacity]. You should be able to see the exact time of the clap sound in the audio stream, watch the video to isolate the exact frame, and use that information to calculate the precise audio delay.

Use <code>sox</code> to prepend some silence:

 sox -S -t wav <( sox -V1 -n -r <sample-rate> -c <audio-channels> -t wav - trim 0.0 <delay-in-seconds> ) "/path/to/my.file.wav" "/path/to/my.file.flac"

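For example, assuming 48,000Hz stereo audio and a 0.3 second delay (the values are illustrative):

 sox -S -t wav <( sox -V1 -n -r 48000 -c 2 -t wav - trim 0.0 0.3 ) "/path/to/my.file.wav" "/path/to/my.file.flac"
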
Mix the new audio and the old video into a new file:

 gst-launch-0.10 \
     uridecodebin uri="file:///path/to/my.file" \
     ! video/your-video-settings \
     ! mux. \
     uridecodebin uri="file:///path/to/my.file.flac" \
     ! audioconvert \
     ! audiorate \
     ! your_preferred_audio_encoder \
     ! mux. \
     avimux name=mux \
     ! filesink location="/path/to/my.file.new"

Note: you can apply any <code>sox</code> filter this way, like normalising the volume or removing background noise.

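For example, a sketch that also normalises the volume while converting (the ''norm'' level of -3dB is an arbitrary choice):

 sox -S "/path/to/my.file.wav" "/path/to/my.file.flac" norm -3
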
==== A specific solution for measuring your time-offset ====

Measuring your time-offset will probably be the most unique part of your recording solution. Here is one solution you could use when digitising old VHS tapes:

# Connect a camcorder to your VCR
# Tune the VCR so it shows the camcorder output when it's not playing
# Start your GStreamer pipeline
# Clap your hands in front of the camcorder so you can later measure A/V synchronisation
# Press play on the VCR
# When the video has finished recording, split the audio and video tracks as described above
# Examine the audio with [http://audacityteam.org/ Audacity] and identify the precise time of the clap sound
# Examine the video with [http://avidemux.sourceforge.net/ avidemux] and identify the frame of the clap image

You'll probably need to change every step of the above to match your situation, but hopefully it will provide some inspiration.

=== Avoiding pitfalls of disturbed video signals ===

* Most video capturing devices send EndOfStream signals if the quality of the input signal is too bad or if there is a period of snow. This aborts the capturing process. To prevent the device from sending EOS, set ''num-buffers=-1'' on the ''v4l2src'' element.
* The ''stamp'' plugin gets confused by periods of snow, producing faulty timestamps and framedropping. This effect itself doesn't matter, as ''stamp'' recovers normal behaviour when the disturbance is over. But chances are good that the buffers are full of old, wrongly stamped frames. ''stamp'' then drops only one of them per sync-interval, with the result that it can take several minutes until everything works fine again. To solve this problem, set ''leaky=2'' on each ''queue'' element to allow dropping of old frames which aren't needed any longer.
* Periods of noise (snow, bad signal etc.) are hard to encode. Variable bitrate encoders will often drive up the bitrate during the noise then down afterwards to maintain the average bitrate. To minimise these issues, specify a minimum and maximum bitrate in your encoder, as in the fragment below.

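For example, the ready-made script later on this page constrains <code>ffenc_mpeg4</code> roughly like this fragment (the numbers correspond to its defaults for an 8,000kbit/s recording):

 ... ! ffenc_mpeg4 bitrate=8000000 rc-min-rate=7000000 rc-max-rate=16000000 rc-buffer-size=32000000 ! ...
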
== Sample pipelines ==

At some point, you will probably need to build your own GStreamer pipeline. Here are some examples to give you the basic idea:

=== Record raw video only ===

A simple pipeline that initialises one video ''source'', sets the video format, ''muxes'' it into a file format, then saves it to a file:

 gst-launch-0.10 \
     v4l2src do-timestamp=true device=$VIDEO_DEVICE \
     ! video/x-raw-yuv,width=640,height=480 \
     ! avimux \
     ! filesink location=test0.avi

<code>tcprobe</code> says this video-only file uses the I420 codec and gives the framerate as correct NTSC:

 $ tcprobe -i test1.avi
 [tcprobe] RIFF data, AVI video
 [avilib] V: 29.970 fps, codec=I420, frames=315, width=640, height=480
 [tcprobe] summary for test1.avi, (*) = not default, 0 = not detected
 import frame size: -g 640x480 [720x576] (*)
 frame rate: -f 29.970 [25.000] frc=4 (*)
 no audio track: use "null" import module for audio
 length: 315 frames, frame_time=33 msec, duration=0:00:10.510

The files will play in mplayer, using the codec [raw] RAW Uncompressed Video.

=== Record to ogg theora ===

Here is a more complex example that initialises two sources - one video, one audio:

 gst-launch-0.10 \
     v4l2src do-timestamp=true device=$VIDEO_DEVICE \
     ! video/x-raw-yuv,width=640,height=480,framerate=\(fraction\)30000/1001 \
     ! ffmpegcolorspace \
     ! theoraenc \
     ! queue \
     ! mux. \
     alsasrc do-timestamp=true device=$AUDIO_DEVICE \
     ! audio/x-raw-int,channels=2,rate=32000,depth=16 \
     ! audioconvert \
     ! vorbisenc \
     ! mux. \
     oggmux name=mux \
     ! filesink location=test0.ogg

Each source is encoded and piped into a ''muxer'' that builds an ogg-formatted data stream. The stream is then saved to <code>test0.ogg</code>. Note the required workaround to get sound on a saa7134 card, which is set at 32000Hz (cf. [http://pecisk.blogspot.com/2006/04/alsa-worries-countinues.html bug]). However, I was still unable to get sound output, though mplayer claimed there was sound -- the video is good quality:

 VIDEO: [theo] 640x480 24bpp 29.970 fps 0.0 kbps ( 0.0 kbyte/s)
 ...
 Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis decoder)

=== Record to mpeg4 ===

This is similar to the above, but generates an AVI file with streams encoded using AVI-compatible encoders (Debian has disabled ffmpeg encoders, so install Marillat's package or use the example above):

 gst-launch-0.10 \
     v4l2src do-timestamp=true device=$VIDEO_DEVICE \
     ! video/x-raw-yuv,width=640,height=480,framerate=\(fraction\)30000/1001 \
     ! ffmpegcolorspace \
     ! ffenc_mpeg4 \
     ! queue \
     ! mux. \
     alsasrc do-timestamp=true device=$AUDIO_DEVICE \
     ! audio/x-raw-int,channels=2,rate=32000,depth=16 \
     ! audioconvert \
     ! lame \
     ! mux. \
     avimux name=mux \
     ! filesink location=test0.avi

I get a file out of this that plays in mplayer, with blocky video and no sound. Avidemux cannot open the file.

||
=== GStreamer 1.0: record from a bad analog signal to MJPEG video and RAW mono audio === |
|||
====Record to DVD-compliant MPEG2==== |
|||
''stamp'' is not available in GStreamer 1.0, ''cogcolorspace'' and ''ffmpegcolorspace'' have been replaced by ''videoconvert'': |
|||
gst-launch-1.0 \ |
|||
v4l2src do-timestamp=true device=$VIDEO_DEVICE do-timestamp=true \ |
|||
! 'video/x-raw,format=(string)YV12,width=(int)720,height=(int)576' \ |
|||
! videorate \ |
|||
! 'video/x-raw,format=(string)YV12,framerate=25/1' \ |
|||
! videoconvert \ |
|||
! 'video/x-raw,format=(string)YV12,width=(int)720,height=(int)576' |
|||
! jpegenc \ |
|||
! queue \ |
|||
! mux. \ |
|||
alsasrc do-timestamp=true device=$AUDIO_DEVICE \ |
|||
! 'audio/x-raw,format=(string)S16LE,rate=(int)48000,channels=(int)2' \ |
|||
! audiorate \ |
|||
! audioresample \ |
|||
! 'audio/x-raw,rate=(int)44100' \ |
|||
! audioconvert \ |
|||
! 'audio/x-raw,channels=(int)1' \ |
|||
! queue \ |
|||
! mux. \ |
|||
avimux name=mux ! filesink location=test.avi |
|||
As stated above, it is best to use both audiorate and videorate: you problably use the same chip to capture both audio stream and video stream so the audio part is subject to disturbance as well. |
=== View pictures from a webcam ===

Here are some miscellaneous examples for viewing webcam video:

 gst-launch-0.10 \
     v4l2src do-timestamp=true use-fixed-fps=false \
     ! video/x-raw-yuv,format=\(fourcc\)UYVY,width=320,height=240 \
     ! ffmpegcolorspace \
     ! autovideosink

 gst-launch-0.10 \
     v4lsrc do-timestamp=true autoprobe-fps=false device=$VIDEO_DEVICE \
     ! "video/x-raw-yuv,format=(fourcc)I420,width=160,height=120,framerate=10" \
     ! autovideosink

=== Entrans: Record to DVD-compliant MPEG2 ===

 entrans -s cut-time -c 0-180 -v -x '.*caps' --dam -- --raw \
     v4l2src queue-size=16 do-timestamp=true device=$VIDEO_DEVICE norm=PAL-BG num-buffers=-1 ! stamp silent=false progress=0 sync-margin=2 sync-interval=5 ! \
     queue silent=false leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! dam ! \
     cogcolorspace ! videorate silent=false ! \
     ...

* It seems to be important that the ''video/x-raw-yuv,width=720,height=576,framerate=25/1,interlaced=true,aspect-ratio=4/3'' statement comes after ''videorate'', as videorate otherwise seems to drop the aspect-ratio metadata, resulting in files with aspect-ratio 1 in their headers. Those files are probably played back warped, and programs like dvdauthor complain. See the fragment below.

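In other words, the safe ordering is this fragment (not a complete pipeline):

 ... ! videorate ! video/x-raw-yuv,width=720,height=576,framerate=25/1,interlaced=true,aspect-ratio=4/3 ! ...
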
== Ready-made scripts ==

Although no two use cases are the same, it can be useful to see scripts used by other people. These can fill in blanks and provide inspiration for your own work.

=== Bash script to record video tapes with GStreamer (work-in-progress) ===

Note: as of August 2015, this script is still being fine-tuned. Come back in a month or two to see the final version.

<nowiki>
#!/bin/bash
#
# GStreamer lets you build a pipeline (a DAG of elements) to process audio and video.
#
# At the time of writing, both the 0.10 and 1.0 series were installed by default.
# So far as I can tell, the 1.0 series has some kind of bug that breaks TV-recording utterly
# (possibly a bug in selecting the output formats from v4l2src)
# If the 1.0 series gets fixed, you should only need to change a few commands here and there
# (see http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/chapter-porting-1.0.html)
#
# We also use `v4l2-ctl` from the v4l-utils package to set the input source,
# and `sox` (from the `sox` package) to edit the audio
#
# Approximate system requirements for maximum quality settings (smaller images and lower bitrates need less):
# * about 10GB free for every hour of recording (6-7GB for temporary files, 3-4GB for the output file)
# * 3GHz processor (preferably at least dual core, so other processes don't steal the encoder's CPU time)
# * about 2GB of memory for every hour of recording (the second encoding pass needs to see the whole file)

HELP_MESSAGE="Usage: $0 --init
       $0 --record <directory>
       $0 --kill <directory> <timeout>
       $0 --process <directory>
       $0 --clean <directory>

Record a video into a directory (one directory per video).

--init     create an initial ~/.gstreamer-record-scriptrc
           please edit this file before your first recording
--record   create a first-pass recording in the specified directory
--kill     stop the recording in the specified directory after a specific amount of time
           see \`man sleep\` for details about allowed time formats
--process  build the final recording in the specified directory
           make sure to edit \`.gstreamer-record-scriptrc\` in that directory first
--clean    delete temporary files
"

CONFIGURATION='#
# CONFIGURATION FOR GSTREAMER RECORD SCRIPT
# For more information, see http://www.linuxtv.org/wiki/index.php/GStreamer
#

#
# VARIABLES YOU NEED TO EDIT
# Every system and every use case is slightly different.
# Here are the things you will probably need to change:
#

# Set these based on your hardware/location:
VIDEO_DEVICE=${VIDEO_DEVICE:-/dev/video0}           # `ls /dev/video*` for a list
AUDIO_DEVICE=${AUDIO_DEVICE:-hw:CARD=SAA7134,DEV=0} # `arecord -L` for a list
NORM=${NORM:-PAL}                                   # (search Wikipedia for the exact norm in your country)
VIDEO_KBITRATE="${VIDEO_KBITRATE:-8000}" # test for yourself, but 8000 seems to produce a high quality result (we use bitrate/1000 for readability and to help calculate the values below)
AUDIO_BITRATE="${AUDIO_BITRATE:-32000}"  # only bitrate supported by SAA7134 drivers - do `arecord -D $AUDIO_DEVICE --dump-hw-params -d 1 /dev/null` to see what your device supports
VIDEO_INPUT="${VIDEO_INPUT:-1}"          # composite input - `v4l2-ctl --device=$VIDEO_DEVICE --list-inputs` for a list

# PAL video is approximately 720x576 resolution. VHS tapes have about half the horizontal quality, but this post convinced me to encode at 720x576 anyway:
# http://forum.videohelp.com/threads/215570-Sensible-resolution-for-VHS-captures?p=1244415#post1244415
ASPECT_W="${ASPECT_W:-5}"
ASPECT_H="${ASPECT_H:-4}"
SIZE_MULTIPLIER="${SIZE_MULTIPLIER:-144}" # common multipliers include 144 (720x576 - PAL), 128 (640x480 - VGA) and 72 (360x288 - half PAL). Set this lower to reduce CPU usage

# GStreamer automatically keeps audio and video in sync, but most systems start recording audio shortly before video.
# If your system has this problem...
#
# 1. run the first pass of AVI recording
# 2. watch the video in your favourite video player
# 3. adjust the audio delay until the video looks right
# 4. pass the relevant number to the second pass
# 5. if you plan to do several recordings in one session, you can set the following default value
#
# Note: you will have an opportunity to set the audio delay for a specific file later
AUDIO_DELAY="${AUDIO_DELAY:-0.3}"

# Some VCRs consistently run slightly fast or slow. If you suspect your VCR has this problem...
#
# Do a quick test:
# 1. Run this command: gst-launch-0.10 v4l2src ! fpsdisplaysink fps-update-interval=1000
#    * this will measure your average frame rate every second. After a few seconds, it should say "drop rate 25.00"
# 2. Change "FRAMERATE" below to your actual frame rate (e.g. 2502/100 if your frame rate is 25.02 FPS)
#
# Or if you want to be precise:
# 1. Run this command: gst-launch-0.10 v4l2src ! fpsdisplaysink fps-update-interval=100000
#    * this will measure your average frame rate every 100 seconds (you can try different intervals if you like)
# 2. wait 100 seconds, then record the number of frames dropped
# 3. wait another 100 seconds, then record the number of frames dropped again
# 4. calculate (result of step 3) - (result of step 2) - 1
#    * e.g. 5007 - 2504 - 1 == 2502
#    * you need to subtract one because fpsdisplaysink drops one frame every time it displays the counter
# 5. Change "FRAMERATE" below to (result of step 4)/100 (e.g. 2502/100 if 2502 frames were dropped)
FRAMERATE="${FRAMERATE:-2500/100}"

#
# VARIABLES YOU MIGHT NEED TO EDIT
# These are defined in the script, but you can override them here if you need non-default values:
#

# set this to 1 to get lots of debugging data (including DOT graphs of your pipelines):
#DEBUG_MODE=

# Set these to alter the recording quality:
#GST_MPEG4_OPTS="..."
#GST_MPEG4_OPTS_PASS1="..."
#GST_MPEG4_OPTS_PASS2="..."
#GST_LAME_OPTS="..."

# Set these to control the audio/video pipelines:
#GST_QUEUE="..."
#GST_VIDEO_SRC="..."
#GST_AUDIO_SRC="..."
'
#
# CONFIGURATION SECTION
#

CONFIG_SCRIPT="$HOME/.gstreamer-record-scriptrc"
[ -e "$CONFIG_SCRIPT" ] && source "$CONFIG_SCRIPT"
source <( echo "$CONFIGURATION" )

# set this to 1 to get lots of debugging data (including DOT graphs of your pipelines):
DEBUG_MODE="${DEBUG_MODE:-}"

# `gst-inspect` has more information here too:
GST_MPEG4_OPTS="${GST_MPEG4_OPTS:-interlaced=true bitrate=$(( VIDEO_KBITRATE * 1000 )) max-key-interval=15}"
GST_MPEG4_OPTS_PASS1="${GST_MPEG4_OPTS_PASS1:-rc-buffer-size=$(( VIDEO_KBITRATE * 4000 )) rc-max-rate=$(( VIDEO_KBITRATE * 2000 )) rc-min-rate=$(( VIDEO_KBITRATE * 875 )) pass=pass1 $GST_MPEG4_OPTS}" # pictures of white noise will max out your bitrate - setting min/max bitrates ensures the video after a period of snow will be reasonable quality
GST_MPEG4_OPTS_PASS2="${GST_MPEG4_OPTS_PASS2:-pass=pass2 $GST_MPEG4_OPTS}" # pictures of white noise will max out your bitrate - setting min/max bitrates ensures the video after a period of snow will be reasonable quality
GST_LAME_OPTS="${GST_LAME_OPTS:-quality=0}"

# `gst-inspect-0.10 <element> | less -i` for a list of properties (e.g. `gst-inspect-0.10 v4l2src | less -i`):
GST_QUEUE="${GST_QUEUE:-queue max-size-buffers=0 max-size-time=0 max-size-bytes=0}"
GST_VIDEO_FORMAT="${GST_VIDEO_FORMAT:-video/x-raw-yuv,width=$(( ASPECT_W * SIZE_MULTIPLIER )),height=$(( ASPECT_H * SIZE_MULTIPLIER )),framerate=$FRAMERATE,interlaced=true,aspect-ratio=$ASPECT_W/$ASPECT_H}"
GST_AUDIO_FORMAT="${GST_AUDIO_FORMAT:-audio/x-raw-int,channels=2,rate=$AUDIO_BITRATE,depth=16}"
GST_VIDEO_SRC="${GST_VIDEO_SRC:-v4l2src device=$VIDEO_DEVICE do-timestamp=true norm=$NORM ! $GST_QUEUE ! videorate silent=false ! $GST_VIDEO_FORMAT}"
GST_AUDIO_SRC="${GST_AUDIO_SRC:-alsasrc device=$AUDIO_DEVICE do-timestamp=true ! $GST_QUEUE ! audioconvert ! audiorate silent=false ! $GST_AUDIO_FORMAT}"

#
# MAIN LOOP
# You should only need to edit this if you're making significant changes to the way the script works
#

echo_bold() {
    echo -e "\e[1m$@\e[0m"
}

set_directory() {
    if [ -z "$1" ]
    then
        echo "$HELP_MESSAGE"
        exit 1
    else
        DIRECTORY="$( readlink -f "$1" )"
        FILE="$DIRECTORY/gstreamer-recording"
        URI="file://$( echo "$FILE" | sed -e 's/ /%20/g' )"
        mkdir -p -- "$DIRECTORY" || exit
        GST_CMD="gst-launch-0.10"
        if [ -n "$DEBUG_MODE" ]
        then
            export GST_DEBUG_DUMP_DOT_DIR="$DIRECTORY/graphs"
            if [ -d "$GST_DEBUG_DUMP_DOT_DIR" ]
            then
                rm -f "$GST_DEBUG_DUMP_DOT_DIR"/*.dot
            else
                mkdir "$GST_DEBUG_DUMP_DOT_DIR"
            fi
            GST_CMD="$GST_CMD -v --gst-debug=2"
        fi
    fi
}

case "$1" in

    -i|--i|--in|--ini|--init)
        if [ -e "$CONFIG_SCRIPT" ]
        then
            echo "Please delete $CONFIG_SCRIPT if you want to recreate it"
        else
            echo "$CONFIGURATION" > "$CONFIG_SCRIPT"
            echo "Please edit $CONFIG_SCRIPT to match your system"
        fi
        ;;

    -r|--r|--re|--rec|--reco|--recor|--record)
        # Build a pipeline with sources being encoded as MPEG4 video and FLAC audio, then being muxed into a Matroska container.
        # FLAC and Matroska are used during encoding to ensure we don't lose much data between passes
        set_directory "$2"
        if [ -e "$FILE-temp.mkv" ]
        then
            echo "Please delete the old $FILE-temp.mkv before making a new recording"
            exit 1
        fi
        v4l2-ctl --device="$VIDEO_DEVICE" --set-input $VIDEO_INPUT
        echo_bold "Press ctrl+c to finish recording"
        sudo nice -20 sh -c "echo \$\$ > '$FILE-temp.pid' && exec $GST_CMD -e \
            $GST_VIDEO_SRC ! ffenc_mpeg4 $GST_MPEG4_OPTS_PASS1 'multipass-cache-file=$FILE-temp.log' ! $GST_QUEUE ! mux. \
            $GST_AUDIO_SRC ! flacenc ! $GST_QUEUE ! mux. \
            matroskamux name=mux ! filesink location='$FILE-temp.mkv'"
        echo_bold extracting audio...
        $GST_CMD -q \
            uridecodebin uri="$URI-temp.mkv" \
            ! progressreport \
            ! audioconvert \
            ! audiorate \
            ! wavenc \
            ! filesink location="$FILE-temp.wav" \
            | while read ; do echo -n "$REPLY"$'\r'; done
        echo

        cat <<EOF > "$FILE.conf"
#
# NOISE REDUCTION (optional)
#
# To reduce noise in the final stream, identify a period in the recording
# which only has background noise (a second or two should be enough)
#
# If you want to reduce noise, set these two variables to the start and
# duration of the noise period (in seconds):
#
NOISE_START=
NOISE_DURATION=

#
# AUDIO DELAY
#
# To add a period of silence at the beginning of the video, watch the .mkv
# file and decide how much silence you want.
#
# If you want to add a delay, set this variable to the duration in seconds
# (can be fractional):
#
AUDIO_DELAY=$AUDIO_DELAY

# Uncomment the following when you've finished editing
# (it's just here to prevent silly mistakes)
# Then re-run the script with --process
#CONFIG_FILE_DONE=done
EOF

        cat <<EOF
Now please check $FILE-temp.mkv
If you've recorded the wrong thing, delete it and start again
Otherwise edit $FILE.conf and set any variables you want
EOF
        ;;

    -k|--k|--ki|--kil|--kill)
        set_directory "$2"
        sudo sh -c "sleep '$3' && kill -s 2 '$(< "$FILE-temp.pid" )'"
        ;;

    -p|--p|--pr|--pro|--proc|--proce|--proces|--process)
        set_directory "$2"
        [ -e "$FILE.conf" ] && source "$FILE.conf"
        if [ -z "$CONFIG_FILE_DONE" ]
        then
            echo "Please edit $FILE.conf before processing the file"
            exit 1
        fi
        if [ -e "$FILE.avi" ]
        then
            echo "Please delete the old $FILE.avi before making a new recording"
            exit 1
        fi

        AUDIO_FILE="$FILE-temp.flac"

        # Note: to make this script support a system where the video is ahead of the audio,
        # change the `sox` commands below to `trim` the source audio instead of inserting silence before it

        # Prepend $AUDIO_DELAY seconds of silence to the audio, calculate a noise profile, and reduce noise based on that profile
        if [ -n "$NOISE_START" -a -n "$AUDIO_DELAY" ]
        then
            echo_bold "improving audio..."
            sox -S \
                -t wav <( sox -V1 -n -r $AUDIO_BITRATE -c 2 -t wav - trim 0.0 $AUDIO_DELAY ) "$FILE-temp.wav" "$FILE-temp.flac" \
                noisered <( sox -V1 "$FILE-temp.wav" -t wav - trim "$NOISE_START" "$NOISE_DURATION" | sox -t wav - -n noiseprof - ) 0.21
        elif [ -n "$NOISE_START" ]
        then
            echo_bold "improving audio..."
            sox -S \
                "$FILE-temp.wav" "$FILE-temp.flac" \
                noisered <( sox -V1 "$FILE-temp.wav" -t wav - trim "$NOISE_START" "$NOISE_DURATION" | sox -t wav - -n noiseprof - ) 0.21
        elif [ -n "$AUDIO_DELAY" ]
        then
            echo_bold "improving audio..."
            sox -S \
                -t wav <( sox -V1 -n -r $AUDIO_BITRATE -c 2 -t wav - trim 0.0 $AUDIO_DELAY ) "$FILE-temp.wav" "$FILE-temp.flac"
        else
            AUDIO_FILE="$FILE-temp.wav"
        fi

        # Do a second pass over the video (shrinking the file size), and replace the audio with the improved version:
        echo_bold "building final file..."
        nice -n +20 $GST_CMD -q \
            uridecodebin uri="$URI-temp.mkv" name=video \
            uridecodebin uri="file://$AUDIO_FILE" name=audio \
            video. ! progressreport ! deinterlace ! ffenc_mpeg4 $GST_MPEG4_OPTS_PASS2 "multipass-cache-file=$FILE-temp.log" ! $GST_QUEUE ! mux.video_0 \
            audio. ! audioconvert ! audiorate ! lamemp3enc $GST_LAME_OPTS ! $GST_QUEUE ! mux.audio_0 \
            avimux name=mux ! filesink location="$FILE.avi"
        echo_bold "Saved to $FILE.avi"
        ;;

    -c|--c|--cl|--clea|--clean)
        rm -f "$2"-temp.*
        ;;

    *)
        echo "$HELP_MESSAGE"
esac
</nowiki>

This script generates a video in two passes: first it records and builds statistics, then lets you analyse the output, then builds an optimised final version.

=== Bash script to record video tapes with entrans ===

<nowiki>
#!/bin/bash
...

nice -n -10 entrans -s cut-time -c 0-$laenge -m --dam -- --raw \
    v4l2src queue-size=16 do-timestamp=true device=$VIDEO_DEVICE norm=PAL-BG num-buffers=-1 ! stamp sync-margin=2 sync-interval=5 silent=false progress=0 ! \
    queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! dam ! \
    cogcolorspace ! videorate ! \
    ...

rm ~/.lock_shutdown.digitalisieren
</nowiki>

The script uses a command line similar to [[GStreamer#Record_to_DVD-compliant_MPEG2|this]] to produce a DVD compliant MPEG2 file.

* The script aborts if another instance is already running.
* Otherwise it asks for the length of the tape and its description
* It records to ''description.mpg'' or, if this file already exists, to ''description.0.mpg'' and so on, for the given time plus 10 minutes. The target directory has to be specified in the beginning of the script.
* As configuring the inputs and settings of the capture device is only partly possible via GStreamer, other tools are used.
* Adjust the settings to match your input sources, the recording volume, capturing saturation and so on.

== Converting formats ==

To convert the files to matlab (didn't work for me):

 mencoder test0.avi -ovc raw -vf format=bgr24 -o test0m.avi -ffourcc none

For details, see the gst-launch documentation and Google; the plugins in particular are poorly documented so far.

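An untested sketch of attempting a similar conversion with GStreamer itself (the RGB caps are an assumption based on the mencoder command above, and avimux may not accept them on your system):

 gst-launch-0.10 \
     filesrc location=test0.avi \
     ! decodebin \
     ! ffmpegcolorspace \
     ! video/x-raw-rgb,bpp=24,depth=24 \
     ! avimux \
     ! filesink location=test0m.avi
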
|||
Revision as of 00:03, 5 August 2015
GStreamer is a toolkit for building audio- and video-processing pipelines. A pipeline might stream video from a file to a network, or add an echo to a recording, or (most interesting to us) capture the output of a Video4Linux device. Gstreamer is most often used to power graphical applications such as Totem, but this page will explain how to build an encoder using its command-line interface.
Getting Started with GStreamer
GStreamer, its most common plugins tools like are available through your distribution's package manager. But entrans
and some of the plugins used in the examples below are not. You can find their sources bundled by the GEntrans project at sourceforge. Google may help you to find precompiled packages for your distro.
Two series of GStreamer are available - 0.10 and 1.0. Most Linux distributions include both, but this page discusses the older 0.10 series because I was unable to get the 1.0 series to work with my TV card. Converting the commands below to work with 1.0 is mostly just search-and-replace work (e.g. changing instances of ff
to av
because of the switch from ffmpeg
to libavcodec
). See the porting guide for more.
Using GStreamer with gst-launch
gst-launch
is the standard command-line interface to GStreamer. Here's the simplest pipline you can build:
gst-launch-0.10 fakesrc ! fakesink
This connects a single (fake) source to a single (fake) sink using the 0.10 series of GStreamer:
To learn more about the source and sink elements, do:
gst-inspect-0.10 fakesrc gst-inspect-0.10 sink
If you have installed Graphviz, you can build a graph like the above yourself:
mkdir gst-visualisations GST_DEBUG_DUMP_DOT_DIR=gst-visualisations gst-launch-0.10 fakesrc ! fakesink dot -Tpng gst-visualisations/*-gst-launch.PLAYING_READY.dot > my-pipeline.png
To get graphs of the example pipelines below, prepend GST_DEBUG_DUMP_DOT_DIR=gst-visualisations to the gst-launch command. Run this command to generate a PNG version of GStreamer's most interesting stage:
dot -Tpng gst-visualisations/*-gst-launch.PLAYING_READY.dot > my-pipeline.png
Remember to empty the gst-visualisations directory between runs.
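If you generate graphs a lot, the steps can be wrapped in a small helper. This is only a sketch - the script name and output directory are invented for illustration, and it assumes gst-launch-0.10 and dot are on your PATH:
#!/bin/bash
# visualise-pipeline.sh (hypothetical helper): run a pipeline, then render
# every DOT graph that GStreamer dumps into a PNG file
mkdir -p gst-visualisations
rm -f gst-visualisations/*.dot
GST_DEBUG_DUMP_DOT_DIR=gst-visualisations gst-launch-0.10 "$@"
for f in gst-visualisations/*.dot ; do
    dot -Tpng "$f" > "${f%.dot}.png"    # one PNG per pipeline state
done
Invoke it with the pipeline as arguments, e.g. ./visualise-pipeline.sh fakesrc ! fakesink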
Using GStreamer with entrans
gst-launch is the main command-line interface to GStreamer, available by default. But entrans is a bit smarter:
- it provides partly automatic composition of GStreamer pipelines
- it allows cutting of streams, e.g. to capture for a predefined duration. That ensures headers are written correctly, which is not always the case if you close gst-launch by pressing Ctrl+C. To use this feature you have to insert a dam element after the first queue of each part of the pipeline (see the sketch below)
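For example, here is a minimal sketch of a timed capture, assuming entrans and its dam element are installed and capturing from the default video and audio devices; the -c 0-60 option cuts the recording after exactly 60 seconds so the file is closed cleanly:
entrans -s cut-time -c 0-60 --dam -- --raw \
    v4l2src do-timestamp=true ! queue ! dam ! \
    ffmpegcolorspace ! theoraenc ! queue ! mux. \
    alsasrc do-timestamp=true ! queue ! dam ! \
    audioconvert ! vorbisenc ! queue ! mux. \
    oggmux name=mux ! filesink location=test-60s.ogg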
Using GStreamer for V4L TV capture
No two use cases for encoding are quite alike. Is your processor fast enough to encode high quality video? Do you want to play your video in DVD players, or is it enough that it works in your version of VLC? Which obscure quirks does your system have?
Use GStreamer if you want the best video quality possible with your hardware, and don't mind spending a weekend browsing the Internet for information.
Avoid GStreamer if you just want something quick-and-dirty, or can't stand poorly documented programs.
Why prefer GStreamer?
GStreamer is better than most tools at synchronising audio with video in disturbed sources such as VHS tapes. If you specify your input is (say) 25 frames per second video and 48,000Hz audio, most tools will synchronise audio and video simply by writing 1 video frame, 1,920 audio samples (48,000 ÷ 25), 1 video frame and so on. This calculation can lead to errors for some sources:
- if the audio and video devices take different amounts of time to initialise. For example, the first audio frame might be delivered to GStreamer 0.01 seconds after it was requested, but the first video frame might not be delivered until 0.7 seconds after it was requested, causing all video to be 0.6 seconds behind the audio. mencoder's -delay option solves this by delaying the audio
- if frames are dropped, audio and video shift relative to each other. For example, if your CPU is not fast enough and sometimes drops a video frame, after 25 dropped video frames the video will be one second ahead of the audio. mencoder's -harddup option solves this by duplicating other frames to fill in the gaps
- if your hardware has a slightly inaccurate clock (common in low-cost home-user products). For example, your webcam might deliver 25.01 video frames per second and your audio source might deliver 47,999Hz, causing your audio and video to drift apart by a second or so per hour. mencoder has no solution for this problem
- video tapes are especially problematic here - if you've ever seen a VCR struggle with a low quality recording (e.g. the few seconds between two recordings on a tape), you've seen it adjusting the tape speed to accurately track the source. Frame counts can vary enough during these periods to instantly desynchronise audio and video
GStreamer solves these problems by attaching a timestamp to each incoming frame based on the time GStreamer receives the frame. It can then mux the sources back together accurately using these timestamps, either by using a format that supports variable framerates or by duplicating frames to fill in the blanks:
- If you choose a container format that supports timestamps (e.g. Matroska), timestamps are automatically written to the file and used to vary the playback speed
- If you choose a container format that does not support timestamps (e.g. AVI), you must duplicate other frames to fill in the gaps by adding the videorate and audiorate plugins to the end of the relevant pipelines
To get accurate timestamps, specify the do-timestamp=true option for all your sources. This will ensure accurate timestamps are retrieved from the driver where possible. Sadly, many v4l2 drivers don't support timestamps - GStreamer will add timestamps for these drivers to stop audio and video drifting apart, but you will need to fix the constant time-offset yourself (discussed below).
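Putting these pieces together, here is a sketch of a timestamp-friendly pipeline for a timestamp-less container like AVI, capturing from the default devices; the encoder choices are only examples:
gst-launch-0.10 \
    v4l2src do-timestamp=true \
    ! videorate \
    ! video/x-raw-yuv,width=640,height=480,framerate=25/1 \
    ! ffmpegcolorspace ! ffenc_mpeg4 ! queue ! mux. \
    alsasrc do-timestamp=true \
    ! audiorate \
    ! audio/x-raw-int,channels=2,rate=48000,depth=16 \
    ! audioconvert ! lame ! queue ! mux. \
    avimux name=mux ! filesink location=timestamped.avi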
Common capturing issues and their solutions
Determining your video source
See all your video sources by doing:
ls /dev/video*
One of these is the card you want. Most people only have one, or can figure it out by disconnecting devices and rerunning the above command. Otherwise, check the capabilities of each device:
for VIDEO_DEVICE in /dev/video* ; do echo ; echo ; echo $VIDEO_DEVICE ; echo ; v4l2-ctl --device=$VIDEO_DEVICE --list-inputs ; done
Usually you will see e.g. a webcam with a single input and a TV card with multiple inputs. If you're still not sure which one is yours, try each one in turn:
v4l2-ctl --device=<device> --set-input=<whichever-input-you-want-to-use>
gst-launch-0.10 v4l2src do-timestamp=true device=<device> ! autovideosink
(if your source is a VCR, remember to play a video so you know the right one when you see it)
If you like, you can store your device in an environment variable:
VIDEO_DEVICE=<device>
All further examples will use $VIDEO_DEVICE in place of an actual video device.
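Once you have picked a device, it can be worth dumping its current settings and supported controls before tweaking anything; this just uses v4l2-ctl's standard query option:
v4l2-ctl --device=$VIDEO_DEVICE --all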
Determining your audio source
See all your audio sources by doing:
arecord -l
Again, it should be fairly obvious which of these is the right one. Get the device names by doing:
arecord -L | grep ^hw:
If you're not sure which one you want, try each in turn:
gst-launch-0.10 alsasrc do-timestamp=true device=hw:<device> ! autoaudiosink
Again, you should hear your tape playing when you get the right one. Note: always use an ALSA hw device, as they are closest to the hardware. PulseAudio devices and ALSA's plughw devices add extra layers that, while more convenient for most uses, only cause headaches for us.
Optionally set your device in an environment variable:
AUDIO_DEVICE=<device>
All further examples will use $AUDIO_DEVICE in place of an actual audio device.
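To see which sample rates and formats the device really supports (useful when choosing caps later), ask the driver with a one-second dummy capture - the same trick used in the recording script further down this page:
arecord -D $AUDIO_DEVICE --dump-hw-params -d 1 /dev/null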
Reducing Jerkiness
If motion that should appear smooth instead stops and starts, try the following:
Check for muxer issues. Some muxers need big chunks of data, which can cause one stream to pause while it waits for the other to fill up. Change your pipeline to pipe your audio and video directly to their own filesinks - if the separate files don't judder, the muxer is the problem.
- If the muxer is at fault, add ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 immediately before each stream goes to the muxer
- queues have hard-coded maximum sizes - you can chain queues together if you need more buffering than one queue can hold (see the sketch after this list)
Check your CPU load. When GStreamer uses 100% CPU, it may need to drop frames to keep up.
- If frames are dropped occasionally when CPU usage spikes to 100%, add a (larger) buffer to help smooth things out.
- this can be a source's internal buffer (e.g. v4l2src queue-size=16 or alsasrc buffer-time=2000000), or it can be an extra buffering step in your pipeline (! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0)
- If frames are dropped when other processes have high CPU load, consider using nice to make sure encoding gets CPU priority
- If frames are dropped regularly, use a different codec, change the parameters, lower the resolution, or otherwise choose a less resource-intensive solution
As a general rule, you should try increasing buffers first - if it doesn't work, it will just increase the pipeline's latency a bit. Be careful with nice, as it can slow down or even halt your computer.
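As an illustration of the queue-chaining trick, here is a sketch of a video-only capture that buffers through two unlimited queues before the sink; the encoder and container are only examples:
gst-launch-0.10 \
    v4l2src do-timestamp=true device=$VIDEO_DEVICE \
    ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=buffered.ogg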
Check for incorrect timestamps. If your video driver works by filling up an internal buffer then passing a cluster of frames without timestamps, GStreamer will think these should all have (nearly) the same timestamp. Make sure you have a videorate element in your pipeline, then add silent=false to it. If it reports many framedrops and framecopies even when the CPU load is low, the driver is probably at fault.
- videorate on its own will actually make this problem worse by picking one frame and replacing all the others with it. Instead, install entrans and add its stamp element between v4l2src and queue (e.g. v4l2src do-timestamp=true ! stamp sync-margin=2 sync-interval=5 ! videorate ! queue)
- stamp intelligently guesses timestamps if drivers don't support timestamping. Its sync- options drop or copy frames to get a nearly-constant framerate. Using videorate as well does no harm and can solve some remaining problems
Measuring your video framerate
As mentioned above, some video cards produce slightly too many (or too few) frames per second. To check your system's actual frames per second, start your video source (e.g. a VCR or webcam) then run this command:
gst-launch-0.10 v4l2src ! fpsdisplaysink fps-update-interval=100000
- Let it run for 100 seconds to get a large enough sample. It should print some statistics in the bottom of the window - write down the number of frames dropped
- Let it run for another 100 seconds, then write down the new number of frames dropped
- Calculate (second number) - (first number) - 1 (e.g. 5007 - 2504 - 1 == 2502)
- You need to subtract one because fpsdisplaysink drops one frame every time it displays the counter
- That number is exactly one hundred times your framerate, so you should tell GStreamer e.g. framerate=2502/100
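You can then feed the measured rate back into your pipeline through videorate. A sketch using the example value above (the rest of the pipeline is only for illustration):
gst-launch-0.10 \
    v4l2src do-timestamp=true device=$VIDEO_DEVICE \
    ! videorate \
    ! video/x-raw-yuv,width=640,height=480,framerate=2502/100 \
    ! ffmpegcolorspace ! autovideosink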
Fixing a constant time-offset
If your hardware doesn't support timestamps, your encoded file might have a constant offset between the audio and the video. This offset depends on too many factors to isolate (e.g. a new driver version might increase or decrease the value), so fixing it is a manual process that probably needs to be done every time you encode a file.
Calculate your desired offset:
- Record a video using one of the techniques below
- Open the video in your favourite video player
- Adjust the A/V sync until it looks right to you - different players put this in different places, for example it's Tools > Track Synchronisation in VLC
- Write down your desired time-offset
If possible, look for (or create) clapperboard-like events - moments where an obvious visual element occurred at the same moment as an obvious audio moment. A hand clapping or a cup being placed on a table are good examples.
Extract your audio:
gst-launch-0.10 \
    uridecodebin uri="file:///path/to/my.file" \
    ! progressreport \
    ! audioconvert \
    ! audiorate \
    ! wavenc \
    ! filesink location="/path/to/my.file.wav"
If you have a clapperboard event, you might want to examine the extracted file in an audio editor like Audacity. You should be able to see the exact time of the clap sound in the audio stream, watch the video to isolate the exact frame, and use that information to calculate the precise audio delay.
Use sox to prepend some silence:
sox -S -t wav <( sox -V1 -n -r <sample-rate> -c <audio-channels> -t wav - trim 0.0 <delay-in-seconds> ) "/path/to/my.file.wav" "/path/to/my.file.flac"
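For instance, to prepend 0.3 seconds of silence to a stereo 48,000Hz recording (example values only - substitute your own measurements):
sox -S -t wav <( sox -V1 -n -r 48000 -c 2 -t wav - trim 0.0 0.3 ) "/path/to/my.file.wav" "/path/to/my.file.flac"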
Mix the new audio and the old video into a new file:
gst-launch-0.10 \
    uridecodebin uri="file:///path/to/my.file" \
    ! video/your-video-settings \
    ! mux. \
    uridecodebin uri="file:///path/to/my.file.flac" \
    ! audioconvert \
    ! audiorate \
    ! your_preferred_audio_encoder \
    ! mux. \
    avimux name=mux \
    ! filesink location="/path/to/my.file.new"
Note: you can apply any sox filter this way, like normalising the volume or removing background noise.
A specific solution for measuring your time-offset
Measuring your time-offset will probably be the most situation-specific part of your recording solution. Here is one solution you could use when digitising old VHS tapes:
- Connect a camcorder to your VCR
- Tune the VCR so it shows the camcorder output when it's not playing
- Start your GStreamer pipeline
- Clap your hands in front of the camcorder so you can later measure A/V synchronisation
- Press play on the VCR
- When the video has finished recording, split the audio and video tracks as described above
- Examine the audio with Audacity and identify the precise time of the clap sound
- Examine the video with avidemux and identify the frame of the clap image
You'll probably need to change every step of the above to match your situation, but hopefully it will provide some inspiration.
Avoiding pitfalls of disturbed video signals
- Most video capturing devices send EndOfStream signals if the quality of the input signal is too bad or if there is a period of snow. This aborts the capturing process. To prevent the device from sending EOS set num-buffers=-1 on the v4l2src element.
- The stamp plugin gets confused by periods of snow, producing faulty timestamps and framedropping. This effect itself doesn't matter, as stamp recovers normal behaviour when the break is over. But chances are good that the buffers are then full of old, oddly-stamped frames. stamp drops only one of them per sync-interval, with the result that it can take several minutes until everything works fine again. To solve this problem set leaky=2 on each queue element to allow dropping of old frames which aren't needed any longer.
- Periods of noise (snow, bad signal etc.) are hard to encode. Variable bitrate encoders will often drive up the bitrate during the noise then down afterwards to maintain the average bitrate. To minimise these issues, specify a minimum and maximum bitrate in your encoder
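A sketch of how these three fixes fit into one capture pipeline (video-only for brevity; the bitrate values are borrowed from the DVD example below):
gst-launch-0.10 \
    v4l2src do-timestamp=true num-buffers=-1 device=$VIDEO_DEVICE \
    ! queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
    ! ffmpegcolorspace \
    ! ffenc_mpeg2video rc-min-rate=3500000 rc-max-rate=7000000 bitrate=4000000 \
    ! ffmux_mpeg ! filesink location=capture.mpg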
Sample pipelines
At some point, you will probably need to build your own GStreamer pipeline. Here are some examples to give you the basic idea:
Record raw video only
A simple pipeline that initialises one video source, sets the video format, muxes it into a file format, then saves it to a file:
gst-launch-0.10 \
    v4l2src do-timestamp=true device=$VIDEO_DEVICE \
    ! video/x-raw-yuv,width=640,height=480 \
    ! avimux ! filesink location=test0.avi
tcprobe says this video-only file uses the I420 codec and gives the framerate as correct NTSC:
$ tcprobe -i test1.avi
[tcprobe] RIFF data, AVI video
[avilib] V: 29.970 fps, codec=I420, frames=315, width=640, height=480
[tcprobe] summary for test1.avi, (*) = not default, 0 = not detected
import frame size: -g 640x480 [720x576] (*)
frame rate: -f 29.970 [25.000] frc=4 (*)
no audio track: use "null" import module for audio
length: 315 frames, frame_time=33 msec, duration=0:00:10.510
The files will play in mplayer, using the codec [raw] RAW Uncompressed Video.
Record to ogg theora
Here is a more complex example that initialises two sources - one video and one audio:
gst-launch-0.10 \
    v4l2src do-timestamp=true device=$VIDEO_DEVICE \
    ! video/x-raw-yuv,width=640,height=480,framerate=\(fraction\)30000/1001 \
    ! ffmpegcolorspace \
    ! theoraenc \
    ! queue \
    ! mux. \
    alsasrc do-timestamp=true device=$AUDIO_DEVICE \
    ! audio/x-raw-int,channels=2,rate=32000,depth=16 \
    ! audioconvert \
    ! vorbisenc \
    ! mux. \
    oggmux name=mux \
    ! filesink location=test0.ogg
Each source is encoded and piped into a muxer that builds an ogg-formatted data stream. The stream is then saved to test0.ogg. Note the required workaround to get sound on a saa7134 card, which is set at 32000Hz (cf. bug). However, I was still unable to get sound output, though mplayer claimed there was sound - the video is good quality:
VIDEO: [theo] 640x480 24bpp 29.970 fps 0.0 kbps ( 0.0 kbyte/s)
Selected video codec: [theora] vfm: theora (Theora (free, reworked VP3))
AUDIO: 32000 Hz, 2 ch, s16le, 112.0 kbit/10.94% (ratio: 14000->128000)
Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis decoder)
Record to mpeg4
This is similar to the above, but generates an AVI file with streams encoded using AVI-compatible encoders:
gst-launch-0.10 \
    v4l2src do-timestamp=true device=$VIDEO_DEVICE \
    ! video/x-raw-yuv,width=640,height=480,framerate=\(fraction\)30000/1001 \
    ! ffmpegcolorspace \
    ! ffenc_mpeg4 \
    ! queue \
    ! mux. \
    alsasrc do-timestamp=true device=$AUDIO_DEVICE \
    ! audio/x-raw-int,channels=2,rate=32000,depth=16 \
    ! audioconvert \
    ! lame \
    ! mux. \
    avimux name=mux \
    ! filesink location=test0.avi
I get a file out of this that plays in mplayer, with blocky video and no sound. Avidemux cannot open the file.
GStreamer 1.0: record from a bad analog signal to MJPEG video and RAW mono audio
stamp is not available in GStreamer 1.0; cogcolorspace and ffmpegcolorspace have been replaced by videoconvert:
gst-launch-1.0 \
    v4l2src do-timestamp=true device=$VIDEO_DEVICE \
    ! 'video/x-raw,format=(string)YV12,width=(int)720,height=(int)576' \
    ! videorate \
    ! 'video/x-raw,format=(string)YV12,framerate=25/1' \
    ! videoconvert \
    ! 'video/x-raw,format=(string)YV12,width=(int)720,height=(int)576' \
    ! jpegenc \
    ! queue \
    ! mux. \
    alsasrc do-timestamp=true device=$AUDIO_DEVICE \
    ! 'audio/x-raw,format=(string)S16LE,rate=(int)48000,channels=(int)2' \
    ! audiorate \
    ! audioresample \
    ! 'audio/x-raw,rate=(int)44100' \
    ! audioconvert \
    ! 'audio/x-raw,channels=(int)1' \
    ! queue \
    ! mux. \
    avimux name=mux ! filesink location=test.avi
This pipeline records video from the V4L2 device (compressed as MJPEG) and sound from the ALSA device into an AVI file. Adapt the input formats to whatever your chip delivers, and apply any transformations after videorate/audiorate. As stated above, it is best to use both audiorate and videorate: you probably use the same chip to capture both the audio stream and the video stream, so the audio part is subject to disturbance as well.
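To preview the picture before committing it to disk, you can swap the encoder, muxer and filesink for a video sink; this is just a convenience sketch (autovideosink picks whatever output works on your system):
gst-launch-1.0 \
    v4l2src do-timestamp=true device=$VIDEO_DEVICE \
    ! 'video/x-raw,format=(string)YV12,width=(int)720,height=(int)576' \
    ! videorate \
    ! 'video/x-raw,format=(string)YV12,framerate=25/1' \
    ! videoconvert ! autovideosink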
View pictures from a webcam
Here are some miscellaneous examples for viewing webcam video:
gst-launch-0.10 \
    v4l2src do-timestamp=true use-fixed-fps=false \
    ! video/x-raw-yuv,format=\(fourcc\)UYVY,width=320,height=240 \
    ! ffmpegcolorspace \
    ! autovideosink

gst-launch-0.10 \
    v4lsrc do-timestamp=true autoprobe-fps=false device=$VIDEO_DEVICE \
    ! "video/x-raw-yuv,format=(fourcc)I420,width=160,height=120,framerate=10/1" \
    ! autovideosink
Entrans: Record to DVD-compliant MPEG2
entrans -s cut-time -c 0-180 -v -x '.*caps' --dam -- --raw \
    v4l2src queue-size=16 do-timestamp=true device=$VIDEO_DEVICE norm=PAL-BG num-buffers=-1 ! stamp silent=false progress=0 sync-margin=2 sync-interval=5 ! \
    queue silent=false leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! dam ! \
    cogcolorspace ! videorate silent=false ! \
    'video/x-raw-yuv,width=720,height=576,framerate=25/1,interlaced=true,aspect-ratio=4/3' ! \
    queue silent=false leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! \
    ffenc_mpeg2video rc-buffer-size=1500000 rc-max-rate=7000000 rc-min-rate=3500000 bitrate=4000000 max-key-interval=15 pass=pass1 ! \
    queue silent=false leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! mux. \
    pulsesrc buffer-time=2000000 do-timestamp=true ! \
    queue silent=false leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! dam ! \
    audioconvert ! audiorate silent=false ! \
    audio/x-raw-int,rate=48000,channels=2,depth=16 ! \
    queue silent=false max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! \
    ffenc_mp2 bitrate=192000 ! \
    queue silent=false leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! mux. \
    ffmux_mpeg name=mux ! filesink location=my_recording.mpg
This captures 3 minutes (180 seconds, see first line of the command) to my_recording.mpg and even works for bad input signals.
- I wasn't able to figure out how to produce an MPEG with AC3 sound, as neither ffmux_mpeg nor mpegpsmux support AC3 streams at the moment. mplex does, but I wasn't able to get it working: one needs very big buffers to prevent the pipeline from stalling, and at least my GStreamer build didn't allow for such big buffers.
- The limited buffer size on my system is also the reason why I had to add a third queue element to the middle of the audio as well as the video part of the pipeline to prevent jerking.
- In many HOWTOs you find ffmpegcolorspace instead of cogcolorspace. You can use that too, but cogcolorspace is much faster.
- It seems to be important that the video/x-raw-yuv,width=720,height=576,framerate=25/1,interlaced=true,aspect-ratio=4/3 statement comes after videorate, as videorate otherwise seems to drop the aspect-ratio metadata, resulting in files with an aspect ratio of 1 in their headers. Those files are probably played back warped, and programs like dvdauthor complain.
Ready-made scripts
Although no two use cases are the same, it can be useful to see scripts used by other people. These can fill in blanks and provide inspiration for your own work.
Bash script to record video tapes with GStreamer (work-in-progress)
Note: as of August 2015, this script is still being fine-tuned. Come back in a month or two to see the final version.
#!/bin/bash
#
# GStreamer lets you build a pipeline (a DAG of elements) to process audio and video.
#
# At the time of writing, both the 0.10 and 1.0 series were installed by default.
# So far as I can tell, the 1.0 series has some kind of bug that breaks TV-recording utterly
# (possibly a bug in selecting the output formats from v4l2src)
# If the 1.0 series gets fixed, you should only need to change a few commands here and there
# (see http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/chapter-porting-1.0.html)
#
# We also use `v4l2-ctl` from the v4l-utils package to set the input source,
# and `sox` (from the `sox` package) to edit the audio
#
# Approximate system requirements for maximum quality settings (smaller images and lower bitrates need less):
# * about 10GB free for every hour of recording (6-7GB for temporary files, 3-4GB for the output file)
# * 3GHz processor (preferably at least dual core, so other processes don't steal the encoder's CPU time)
# * about 2GB of memory for every hour of recording (the second encoding pass needs to see the whole file)

HELP_MESSAGE="Usage: $0 --init
       $0 --record <directory>
       $0 --kill <directory> <timeout>
       $0 --process <directory>
       $0 --clean <directory>

Record a video into a directory (one directory per video).

--init     create an initial ~/.gstreamer-record-scriptrc
           please edit this file before your first recording
--record   create a first-pass recording in the specified directory
--kill     stop the recording in the specified directory after a specific amount of time
           see \`man sleep\` for details about allowed time formats
--process  build the final recording in the specified directory
           make sure to edit \`.gstreamer-record-scriptrc\` in that directory first
--clean    delete temporary files
"

CONFIGURATION='#
# CONFIGURATION FOR GSTREAMER RECORD SCRIPT
# For more information, see http://www.linuxtv.org/wiki/index.php/GStreamer
#

#
# VARIABLES YOU NEED TO EDIT
# Every system and every use case is slightly different.
# Here are the things you will probably need to change:
#

# Set these based on your hardware/location:
VIDEO_DEVICE=${VIDEO_DEVICE:-/dev/video0} # `ls /dev/video*` for a list
AUDIO_DEVICE=${AUDIO_DEVICE:-hw:CARD=SAA7134,DEV=0} # `arecord -L` for a list
NORM=${NORM:-PAL} # (search Wikipedia for the exact norm in your country)
VIDEO_KBITRATE="${VIDEO_KBITRATE:-8000}" # test for yourself, but 8000 seems to produce a high quality result (we use bitrate/1000 for readability and to help calculate the values below)
AUDIO_BITRATE="${AUDIO_BITRATE:-32000}" # only bitrate supported by SAA7134 drivers - do `arecord -D $AUDIO_DEVICE --dump-hw-params -d 1 /dev/null` to see what your device supports
VIDEO_INPUT="${VIDEO_INPUT:-1}" # composite input - `v4l2-ctl --device=$VIDEO_DEVICE --list-inputs` for a list

# PAL video is approximately 720x576 resolution. VHS tapes have about half the horizontal quality, but this post convinced me to encode at 720x576 anyway:
# http://forum.videohelp.com/threads/215570-Sensible-resolution-for-VHS-captures?p=1244415#post1244415
ASPECT_W="${ASPECT_W:-5}"
ASPECT_H="${ASPECT_H:-4}"
SIZE_MULTIPLIER="${SIZE_MULTIPLIER:-144}" # common multipliers include 144 (720x576 - PAL), 128 (640x480 - VGA) and 72 (360x288 - half PAL). Set this lower to reduce CPU usage

# GStreamer automatically keeps audio and video in sync, but most systems start recording audio shortly before video.
# If your system has this problem...
#
# 1. run the first pass of AVI recording
# 2. watch the video in your favourite video player
# 3. adjust the audio delay until the video looks right
# 4. pass the relevant number to the second pass
# 5. if you plan to do several recordings in one session, you can set the following default value
#
# Note: you will have an opportunity to set the audio delay for a specific file later
AUDIO_DELAY="${AUDIO_DELAY:-0.3}"

# Some VCRs consistently run slightly fast or slow. If you suspect your VCR has this problem...
#
# Do a quick test:
# 1. Run this command: gst-launch-0.10 v4l2src ! fpsdisplaysink fps-update-interval=1000
#    * this will measure your average frame rate every second. After a few seconds, it should say "drop rate 25.00"
# 2. Change "FRAMERATE" below to your actual frame rate (e.g. 2502/100 if your frame rate is 25.02 FPS)
#
# Or if you want to be precise:
# 1. Run this command: gst-launch-0.10 v4l2src ! fpsdisplaysink fps-update-interval=100000
#    * this will measure your average frame rate every 100 seconds (you can try different intervals if you like)
# 2. wait 100 seconds, then record the number of frames dropped
# 3. wait another 100 seconds, then record the number of frames dropped again
# 4. calculate (result of step 3) - (result of step 2) - 1
#    * e.g. 5007 - 2504 - 1 == 2502
#    * you need to subtract one because fpsdisplaysink drops one frame every time it displays the counter
# 5. Change "FRAMERATE" below to (result of step 4)/100 (e.g. 2502/100 if 2502 frames were dropped)
FRAMERATE="${FRAMERATE:-2500/100}"

#
# VARIABLES YOU MIGHT NEED TO EDIT
# These are defined in the script, but you can override them here if you need non-default values:
#

# set this to 1 to get lots of debugging data (including DOT graphs of your pipelines):
#DEBUG_MODE=

# Set these to alter the recording quality:
#GST_MPEG4_OPTS="..."
#GST_MPEG4_OPTS_PASS1="..."
#GST_MPEG4_OPTS_PASS2="..."
#GST_LAME_OPTS="..."

# Set these to control the audio/video pipelines:
#GST_QUEUE="..."
#GST_VIDEO_SRC="..."
#GST_AUDIO_SRC="..."
'

#
# CONFIGURATION SECTION
#

CONFIG_SCRIPT="$HOME/.gstreamer-record-scriptrc"
[ -e "$CONFIG_SCRIPT" ] && source "$CONFIG_SCRIPT"
source <( echo "$CONFIGURATION" )

# set this to 1 to get lots of debugging data (including DOT graphs of your pipelines):
DEBUG_MODE="${DEBUG_MODE:-}"

# `gst-inspect` has more information here too.
# Pictures of white noise will max out your bitrate - setting min/max bitrates ensures the video after a period of snow will be reasonable quality:
GST_MPEG4_OPTS="${GST_MPEG4_OPTS:-interlaced=true bitrate=$(( VIDEO_KBITRATE * 1000 )) max-key-interval=15}"
GST_MPEG4_OPTS_PASS1="${GST_MPEG4_OPTS_PASS1:-rc-buffer-size=$(( VIDEO_KBITRATE * 4000 )) rc-max-rate=$(( VIDEO_KBITRATE * 2000 )) rc-min-rate=$(( VIDEO_KBITRATE * 875 )) pass=pass1 $GST_MPEG4_OPTS}"
GST_MPEG4_OPTS_PASS2="${GST_MPEG4_OPTS_PASS2:-pass=pass2 $GST_MPEG4_OPTS}"
GST_LAME_OPTS="${GST_LAME_OPTS:-quality=0}"

# `gst-inspect-0.10 <element> | less -i` for a list of properties (e.g. `gst-inspect-0.10 v4l2src | less -i`):
GST_QUEUE="${GST_QUEUE:-queue max-size-buffers=0 max-size-time=0 max-size-bytes=0}"
GST_VIDEO_FORMAT="${GST_VIDEO_FORMAT:-video/x-raw-yuv,width=$(( ASPECT_W * SIZE_MULTIPLIER )),height=$(( ASPECT_H * SIZE_MULTIPLIER )),framerate=$FRAMERATE,interlaced=true,aspect-ratio=$ASPECT_W/$ASPECT_H}"
GST_AUDIO_FORMAT="${GST_AUDIO_FORMAT:-audio/x-raw-int,channels=2,rate=$AUDIO_BITRATE,depth=16}"
GST_VIDEO_SRC="${GST_VIDEO_SRC:-v4l2src device=$VIDEO_DEVICE do-timestamp=true norm=$NORM ! $GST_QUEUE ! videorate silent=false ! $GST_VIDEO_FORMAT}"
GST_AUDIO_SRC="${GST_AUDIO_SRC:-alsasrc device=$AUDIO_DEVICE do-timestamp=true ! $GST_QUEUE ! audioconvert ! audiorate silent=false ! $GST_AUDIO_FORMAT}"

#
# MAIN LOOP
# You should only need to edit this if you're making significant changes to the way the script works
#

echo_bold() {
    echo -e "\e[1m$@\e[0m"
}

set_directory() {
    if [ -z "$1" ]; then
        echo "$HELP_MESSAGE"
        exit 1
    else
        DIRECTORY="$( readlink -f "$1" )"
        FILE="$DIRECTORY/gstreamer-recording"
        URI="file://$( echo "$FILE" | sed -e 's/ /%20/g' )"
        mkdir -p -- "$DIRECTORY" || exit
        GST_CMD="gst-launch-0.10"
        if [ -n "$DEBUG_MODE" ]; then
            export GST_DEBUG_DUMP_DOT_DIR="$DIRECTORY/graphs"
            if [ -d "$GST_DEBUG_DUMP_DOT_DIR" ]; then
                rm -f "$GST_DEBUG_DUMP_DOT_DIR"/*.dot
            else
                mkdir "$GST_DEBUG_DUMP_DOT_DIR"
            fi
            GST_CMD="$GST_CMD -v --gst-debug=2"
        fi
    fi
}

case "$1" in

    -i|--i|--in|--ini|--init)
        if [ -e "$CONFIG_SCRIPT" ]; then
            echo "Please delete $CONFIG_SCRIPT if you want to recreate it"
        else
            echo "$CONFIGURATION" > "$CONFIG_SCRIPT"
            echo "Please edit $CONFIG_SCRIPT to match your system"
        fi
        ;;

    -r|--r|--re|--rec|--reco|--recor|--record)
        # Build a pipeline with sources being encoded as MPEG4 video and FLAC audio, then being muxed into a Matroska container.
        # FLAC and Matroska are used during encoding to ensure we don't lose much data between passes
        set_directory "$2"
        if [ -e "$FILE-temp.mkv" ]; then
            echo "Please delete the old $FILE-temp.mkv before making a new recording"
            exit 1
        fi
        v4l2-ctl --device="$VIDEO_DEVICE" --set-input $VIDEO_INPUT
        echo_bold "Press ctrl+c to finish recording"
        sudo nice -20 sh -c "echo \$\$ > '$FILE-temp.pid' && exec $GST_CMD -e \
            $GST_VIDEO_SRC ! ffenc_mpeg4 $GST_MPEG4_OPTS_PASS1 'multipass-cache-file=$FILE-temp.log' ! $GST_QUEUE ! mux. \
            $GST_AUDIO_SRC ! flacenc ! $GST_QUEUE ! mux. \
            matroskamux name=mux ! filesink location='$FILE-temp.mkv'"
        echo_bold "extracting audio..."
        $GST_CMD -q \
            uridecodebin uri="$URI-temp.mkv" \
            ! progressreport \
            ! audioconvert \
            ! audiorate \
            ! wavenc \
            ! filesink location="$FILE-temp.wav" \
            | while read ; do echo -n "$REPLY"$'\r'; done
        echo
        cat <<EOF > "$FILE.conf"
#
# NOISE REDUCTION (optional)
#
# To reduce noise in the final stream, identify a period in the recording
# which only has background noise (a second or two should be enough)
#
# If you want to reduce noise, set these two variables to the start and
# duration of the noise period (in seconds):
#
NOISE_START=
NOISE_DURATION=

#
# AUDIO DELAY
#
# To add a period of silence at the beginning of the video, watch the .mkv
# file and decide how much silence you want.
#
# If you want to add a delay, set this variable to the duration in seconds
# (can be fractional):
#
AUDIO_DELAY=$AUDIO_DELAY

# Uncomment the following when you've finished editing
# (it's just here to prevent silly mistakes)
# Then re-run the script with --process
#CONFIG_FILE_DONE=done
EOF
        cat <<EOF
Now please check $FILE-temp.mkv
If you've recorded the wrong thing, delete it and start again
Otherwise edit $FILE.conf and set any variables you want
EOF
        ;;

    -k|--k|--ki|--kil|--kill)
        set_directory "$2"
        sudo sh -c "sleep '$3' && kill -s 2 '$(< "$FILE-temp.pid" )'"
        ;;

    -p|--p|--pr|--pro|--proc|--proce|--proces|--process)
        set_directory "$2"
        [ -e "$FILE.conf" ] && source "$FILE.conf"
        if [ -z "$CONFIG_FILE_DONE" ]; then
            echo "Please edit $FILE.conf before processing the file"
            exit 1
        fi
        if [ -e "$FILE.avi" ]; then
            echo "Please delete the old $FILE.avi before making a new recording"
            exit 1
        fi
        AUDIO_FILE="$FILE-temp.flac"
        # Note: to make this script support a system where the video is ahead of the audio,
        # change the `sox` commands below to `trim` the source audio instead of inserting silence before it

        # Prepend $AUDIO_DELAY seconds of silence to the audio, calculate a noise profile, and reduce noise based on that profile
        if [ -n "$NOISE_START" -a -n "$AUDIO_DELAY" ]; then
            echo_bold "improving audio..."
            sox -S \
                -t wav <( sox -V1 -n -r $AUDIO_BITRATE -c 2 -t wav - trim 0.0 $AUDIO_DELAY ) "$FILE-temp.wav" "$FILE-temp.flac" \
                noisered <( sox -V1 "$FILE-temp.wav" -t wav - trim "$NOISE_START" "$NOISE_DURATION" | sox -t wav - -n noiseprof - ) 0.21
        elif [ -n "$NOISE_START" ]; then
            echo_bold "improving audio..."
            sox -S \
                "$FILE-temp.wav" "$FILE-temp.flac" \
                noisered <( sox -V1 "$FILE-temp.wav" -t wav - trim "$NOISE_START" "$NOISE_DURATION" | sox -t wav - -n noiseprof - ) 0.21
        elif [ -n "$AUDIO_DELAY" ]; then
            echo_bold "improving audio..."
            sox -S \
                -t wav <( sox -V1 -n -r $AUDIO_BITRATE -c 2 -t wav - trim 0.0 $AUDIO_DELAY ) "$FILE-temp.wav" "$FILE-temp.flac"
        else
            AUDIO_FILE="$FILE-temp.wav"
        fi

        # Do a second pass over the video (shrinking the file size), and replace the audio with the improved version:
        echo_bold "building final file..."
        nice -n +20 $GST_CMD -q \
            uridecodebin uri="$URI-temp.mkv" name=video \
            uridecodebin uri="file://$AUDIO_FILE" name=audio \
            video. ! progressreport ! deinterlace ! ffenc_mpeg4 $GST_MPEG4_OPTS_PASS2 "multipass-cache-file=$FILE-temp.log" ! $GST_QUEUE ! mux.video_0 \
            audio. ! audioconvert ! audiorate ! lamemp3enc $GST_LAME_OPTS ! $GST_QUEUE ! mux.audio_0 \
            avimux name=mux ! filesink location="$FILE.avi"
        echo_bold "Saved to $FILE.avi"
        ;;

    -c|--c|--cl|--clea|--clean)
        rm -f "$2"-temp.*
        ;;

    *)
        echo "$HELP_MESSAGE"

esac
This script generates a video in two passes: first it records and builds statistics, then lets you analyse the output, then builds an optimised final version.
Bash script to record video tapes with entrans
#!/bin/bash

targetdirectory="$HOME/videos"

# Abort if another instance is already running
if [[ -e "$HOME/.lock_shutdown.digitalisieren" ]]; then
    echo ""
    echo ""
    echo "Capturing already running. It is impossible to capture two tapes simultaneously. Hit a key to abort."
    read -n 1
    exit
fi

# trap keyboard interrupt (control-c)
trap control_c 0 SIGHUP SIGINT SIGQUIT SIGABRT SIGKILL SIGALRM SIGSEGV SIGTERM

control_c()
# run if user hits control-c
{
    cleanup
    exit $?
}

cleanup()
{
    rm "$HOME/.lock_shutdown.digitalisieren"
    return $?
}

touch "$HOME/.lock_shutdown.digitalisieren"

echo ""
echo ""
echo "Please enter the length of the tape in minutes and press ENTER. (Press Ctrl+C to abort.)"
echo ""

while read -e laenge; do
    if [[ $laenge == [0-9]* ]]; then
        break
    else
        echo ""
        echo ""
        echo "That's not a number."
        echo "Please enter the length of the tape in minutes and press ENTER. (Press Ctrl+C to abort.)"
        echo ""
    fi
done

let laenge=laenge+10    # safety margin in case the tape is longer than stated
let laenge=laenge*60

echo ""
echo ""
echo "Please type in the description of the tape."
echo "Don't forget to rewind the tape!"
echo "Hit ENTER to start capturing. Press Ctrl+C to abort."
echo ""
read -e name;
name=${name//\//_}
name=${name//\"/_}
name=${name//:/_}

# If the name is already taken, append a number
if [[ -e "$targetdirectory/$name.mpg" ]]; then
    nummer=0
    while [[ -e "$targetdirectory/$name.$nummer.mpg" ]]; do
        let nummer=nummer+1
    done
    name=$name.$nummer
fi

# Audio settings: unmute and set levels
amixer -D pulse cset name='Capture Switch' 1 >& /dev/null      # enable the capture channel
amixer -D pulse cset name='Capture Volume' 20724 >& /dev/null  # set the capture level

# Select the video input and configure the card
v4l2-ctl --set-input 3 >& /dev/null
v4l2-ctl -c saturation=80 >& /dev/null
v4l2-ctl -c brightness=130 >& /dev/null

let ende=$(date +%s)+laenge

echo ""
echo "Working"
echo "Capturing will be finished at "$(date -d @$ende +%H.%M)"."
echo ""
echo "Press Ctrl+C to finish capturing now."

nice -n -10 entrans -s cut-time -c 0-$laenge -m --dam -- --raw \
    v4l2src queue-size=16 do-timestamp=true device=$VIDEO_DEVICE norm=PAL-BG num-buffers=-1 ! stamp sync-margin=2 sync-interval=5 silent=false progress=0 ! \
    queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! dam ! \
    cogcolorspace ! videorate ! \
    'video/x-raw-yuv,width=720,height=576,framerate=25/1,interlaced=true,aspect-ratio=4/3' ! \
    queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! \
    ffenc_mpeg2video rc-buffer-size=1500000 rc-max-rate=7000000 rc-min-rate=3500000 bitrate=4000000 max-key-interval=15 pass=pass1 ! \
    queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! mux. \
    pulsesrc buffer-time=2000000 do-timestamp=true ! \
    queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! dam ! \
    audioconvert ! audiorate ! \
    audio/x-raw-int,rate=48000,channels=2,depth=16 ! \
    queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! \
    ffenc_mp2 bitrate=192000 ! \
    queue leaky=2 max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! mux. \
    ffmux_mpeg name=mux ! filesink location="$targetdirectory/$name.mpg" >& /dev/null

echo "Finished Capturing"
rm "$HOME/.lock_shutdown.digitalisieren"
The script uses a command line similar to the entrans example above to produce a DVD-compliant MPEG2 file.
- The script aborts if another instance is already running.
- If not, it asks for the length of the tape and its description.
- It records to description.mpg - or, if that file already exists, to description.0.mpg and so on - for the given time plus 10 minutes. The target directory has to be specified at the beginning of the script.
- As setting the inputs and configuring the capture device is only partly possible via GStreamer, other tools (v4l2-ctl, amixer) are used.
- Adjust the settings to match your input sources, the recording volume, capturing saturation and so on.
Further documentation resources
- GStreamer project
- FAQ
- Documentation
- man gst-launch
- entrans command line tool documentation
- gst-inspect plugin-name