V4L capturing: Difference between revisions

From LinuxTVWiki
== Overview ==
Analogue video technology was largely designed before the advent of computers, so accurately digitising a video is a difficult problem. For example, software often assumes a constant frame rate throughout a video, but analogue technologies can deliver different numbers of frames from second to second. This page will present a framework for recording video, which you can alter for your specific requirements.


=== Recommended process ===
# '''Try the video and transcode again''' - check whether the video works how you want, then transcode again


Converting analogue input to a digital format is hard - VCRs overheat and damage tapes, computers use too much CPU and drop frames, disk drives fill up, etc. Creating a ''good'' digital video is also hard - not all software supports all formats, overscan and background hiss distract the viewer, videos need to be split into useful chunks, and so on. It's much easier to learn the process and produce a quality result if you tackle ''encoding'' in one step and ''transcoding'' in another.

=== Suggested software ===

This page assumes you have installed the following programs:

* [[GStreamer|gst-launch-1.0]] for capturing video (probably part of the ''gstreamer1.0-tools'' package)
* [http://ffmpeg.org/ FFmpeg] for saving and editing video (probably part of the ''ffmpeg'' package)
* [http://git.linuxtv.org/v4l-utils.git v4l2-ctl] for controlling your video card (probably part of the ''v4l-utils'' package)
* [http://mpv.io mpv] for viewing videos (probably part of the ''mpv'' package)

There are alternatives for each of these (e.g. the older ''0.10'' series of [[GStreamer]] and the ''libav'' fork of [http://ffmpeg.org/ FFmpeg]). You should be able to modify the instructions below to suit your preferences.


=== Choosing formats ===


When you create a video, you need to choose your ''video format'' (e.g. XviD or MPEG-2), ''audio format'' (e.g. WAV or MP3) and ''container format'' (e.g. AVI or MP4). There's constant work to improve the ''codecs'' that create audio/video and the ''muxers'' that create containers, and whole new formats are invented fairly regularly, so this page can't recommend any specific formats. For example, as of late 2015 [https://en.wikipedia.org/wiki/MPEG-2 MPEG-2] is the most widely supported by older DVD players, [https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC H.264] is becoming a de facto standard in modern web browsers, and people are waiting to see whether [https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding HEVC] will be blocked by patent trolls. That's probably enough to decide which video codec is right for you in late 2015, but the facts will have changed even by early 2016.


You'll need to do some research to find the currently-recommended formats. Wikipedia's comparisons of [https://en.wikipedia.org/wiki/Comparison_of_audio_coding_formats audio], [https://en.wikipedia.org/wiki/Comparison_of_video_codecs video] and [https://en.wikipedia.org/wiki/Comparison_of_container_formats container] formats are a good place to start. Here are some important things to look for:
* video device (<code>/dev/video''<number>''</code>)
* audio device (<code>hw:CARD=''<id>'',DEV=''<number>''</code>)
* video capabilities (<code>video/x-raw, format=UYVY, framerate=''<fraction>'', width=''<int>'', height=''<int>''</code>)
* audio capabilities (<code>audio/x-raw, rate=''<int>'', channels=''<int>''</code>)
* colour settings (optional - hue, saturation, brightness and contrast)


=== Determining your audio device ===


See all of your audio devices by doing:


arecord -l
gst-launch-1.0 --gst-debug=alsa:5 alsasrc device=$AUDIO_DEVICE ! fakesink 2>&1 | sed -une '/returning caps/ s/[s;] /\n/gp'


You will need to press <kbd>ctrl+c</kbd> to close each of these programs when they've printed some output. When you record your video, you will need to specify capabilities based on the ranges displayed here. Some things to remember:



* audio <code>format</code> is optional (your software can decide this automatically)
* video <code>format</code> should be optional, but as of 2015 a bug means you should specify <code>format=UYVY</code>
* video <code>height</code> (discussed below) should be the appropriate height for your TV norm
* video <code>framerate</code> (discussed below) should be the appropriate value for your TV norm, but may need to be tweaked for your hardware
* <code>pixel-aspect-ratio</code> will be set below - do not specify it here
* for all other capabilities, just pick the highest number (or delete it altogether if there's only one choice)


For example, if your TV norm was some variant of PAL and your video card showed these results:
audio/x-raw, format=(string){ S16LE, U16LE }, layout=(string)interleaved, rate=(int)32000, channels=(int)1</nowiki>


Then you would select <code>video/x-raw, format=UYVY, framerate=25/1, width=720, height=576</code> and <code>audio/x-raw, rate=32000, channels=2</code>


Once again, you can set your capabilities in an environment variable, but you will need to put quote marks around them:


VIDEO_CAPABILITIES="<capabilities>"
AUDIO_CAPABILITIES="<capabilities>"


For example, <code>AUDIO_CAPABILITIES="audio/x-raw, rate=32000, channels=2"</code>. Further examples on this page will use <code>$VIDEO_CAPABILITIES</code> and <code>$AUDIO_CAPABILITIES</code> in place of actual capabilities - you will need to replace these if you don't set environment variables.
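For instance, the PAL example above would be stored like this (the strings are just that example - use the values your own card reported):

```shell
# Capability strings from the PAL example above - replace them with the
# values your own card reported. The quote marks matter, because the
# strings contain spaces.
VIDEO_CAPABILITIES="video/x-raw, format=UYVY, framerate=25/1, width=720, height=576"
AUDIO_CAPABILITIES="audio/x-raw, rate=32000, channels=2"
echo "$VIDEO_CAPABILITIES"
echo "$AUDIO_CAPABILITIES"
```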


==== Video heights ====
Your first step should be to record an accurate copy of your source video. A good quality encoding can use anything up to 30 gigabytes per hour, so figure out how long your video is and make sure you have enough space. Most software isn't optimised for analogue video encoding, causing audio and video to desynchronise in some circumstances.


As well as the values above, you will need to decide the following (preferably storing them as environment variables):

* <code>ENCODING_VIDEO_FORMAT</code> - the format you chose to encode the accurate copy of your video (see <code>ffmpeg -encoders</code> for a list)
* <code>ENCODING_AUDIO_FORMAT</code> - the format you chose to encode the accurate copy of your audio (see <code>ffmpeg -encoders</code> for a list)
* <code>ENCODING_MUXER_FORMAT</code> - the format you chose to mux your videos together (see <code>ffmpeg -formats</code> for a list)
* <code>ENCODING_VIDEO_OPTIONS</code> - settings for your video format (see <code>ffmpeg --help encoder=$ENCODING_VIDEO_FORMAT</code> for a list)
* <code>ENCODING_AUDIO_OPTIONS</code> - settings for your audio format (see <code>ffmpeg --help encoder=$ENCODING_AUDIO_FORMAT</code> for a list)
* <code>ENCODING_MUXER_OPTIONS</code> - settings for your muxer format (see <code>ffmpeg --help muxer=$ENCODING_MUXER_FORMAT</code> for a list)
* <code>ENCODING_FILENAME</code> - your preferred filename (see <code>ffmpeg --help muxer=$ENCODING_MUXER_FORMAT</code> for suggested extensions)
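As a sketch, one way to fill these in is the late-2015 suggestion on this page of x264 video and FLAC audio in a Matroska file; the option strings below are illustrative, so check <code>ffmpeg -encoders</code> and <code>ffmpeg -formats</code> against your own build before copying them:

```shell
# Hypothetical encoding choices - verify each against your own ffmpeg build.
# -qp 0 asks libx264 for lossless output; ultrafast keeps CPU use low so the
# capture is less likely to drop frames.
ENCODING_VIDEO_FORMAT=libx264
ENCODING_VIDEO_OPTIONS="-preset ultrafast -qp 0"
ENCODING_AUDIO_FORMAT=flac
ENCODING_AUDIO_OPTIONS=""
ENCODING_MUXER_FORMAT=matroska
ENCODING_MUXER_OPTIONS=""
ENCODING_FILENAME=accurate-video.mkv
echo "will record to $ENCODING_FILENAME"
```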

[[GStreamer]] is the best program for inputting video on Linux (see [[GStreamer|GStreamer]] for details), but is poorly-documented and hard to use. [http://ffmpeg.org/ FFmpeg] has much better documentation, but can't handle the quirks of analogue video. To get the best results, you'll need to use GStreamer as an FFmpeg source:


<nowiki>ffmpeg \
matroskamux name=mux ! fdsink fd=1
) \
-c:v $ENCODING_VIDEO_FORMAT $ENCODING_VIDEO_OPTIONS \
-c:a $ENCODING_AUDIO_FORMAT $ENCODING_AUDIO_OPTIONS \
-f $ENCODING_MUXER_FORMAT $ENCODING_MUXER_OPTIONS \
"$ENCODING_FILENAME"</nowiki>


This command does two things:
* tells GStreamer to record raw audio and video and use a [http://www.matroska.org/ Matroska media container] to communicate with FFmpeg
* tells FFmpeg to accept the video from GStreamer and encode it using the settings you specified (e.g. remuxing to your preferred container format)


If you have enough free disk space, you could just save the raw video as your accurate copy - see [[GStreamer]] for details.
If you need to resync audio and video during transcoding, you can make your life easier by creating [https://en.wikipedia.org/wiki/Clapperboard clapperboard] effects at the start of your videos - hook up a camcorder, run your capture command, then clap your hands in front of the camera before pressing play on your VCR. Failing that, make note of any moments where an obvious visual element occurred at the same moment as an obvious audio element (such as a cup being placed on a table).


Once you've recorded your video, you'll need to calculate your desired A/V offset. For the best result, play your video with precise timestamps (e.g. <code>mpv --osd-fractions "$ENCODING_FILENAME"</code>) and open your audio in an audio editor (e.g. [http://audacityteam.org/ Audacity]), then find the exact frame when your clapperboard video/audio occurred and subtract one from the other. To confirm your result, run <code>mpv --audio-delay=<result> "$ENCODING_FILENAME"</code> and make sure it looks right.
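For example, suppose the clap is visible at 12.48 seconds but only audible at 12.60 seconds (made-up numbers); subtracting one from the other gives the offset to try. Confirm the sign by ear with a short test - if the sync gets worse, negate it:

```shell
# Hypothetical clapperboard timestamps, in seconds
VIDEO_CLAP=12.48   # frame where the hands meet, read from mpv --osd-fractions
AUDIO_CLAP=12.60   # spike in the waveform, read from Audacity
# Subtract one from the other, as described above
OFFSET=$(awk "BEGIN { printf \"%.2f\", $VIDEO_CLAP - $AUDIO_CLAP }")
echo "try: mpv --audio-delay=$OFFSET your-file"
```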


=== Measuring audio noise ===
The video you recorded should accurately represent your source video, but will probably be a large file, be a noisy experience, and might not even play in some programs. You need to ''transcode'' it to a more usable format. You can use any program(s) to do this, but it's probably easiest to continue using [http://ffmpeg.org/ FFmpeg]:


<nowiki>ffmpeg -i "$ENCODING_FILENAME" \
-c:v <transcoded-video-format> <transcoded-video-options> \
-c:a <transcoded-audio-format> <transcoded-audio-options> \
-f <transcoded-muxer-format> <transcoded-muxer-options> \
<transcoded-filename></nowiki>
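Continuing the late-2015 example on this page (MP3 audio and MPEG-2 video in an AVI file for older players), a filled-in command might look like the following; the formats and bitrates are illustrative guesses, so tune them to your players:

```shell
# Hypothetical filled-in transcode command; printed rather than executed so
# you can inspect it first (run it for real with: eval "$CMD")
CMD='ffmpeg -i accurate-video.mkv -c:v mpeg2video -b:v 4M -c:a libmp3lame -b:a 192k -f avi usable-video.avi'
echo "$CMD"
```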


If you're happy with the result, you can stop here. But you might want to improve the video, for example:
Much like audio, you can spend as long as you like cleaning your video. But whereas audio cleaning tends to be about doing one thing really well (separating out frequencies of signal and noise), video cleaning tends to be about getting decent results in different circumstances. For example, you might want to just remove the overscan lines at the bottom of a VHS recording, denoise a video slightly to reduce file size, or aggressively remove grains to make a low-quality recording watchable. [https://ffmpeg.org/ffmpeg-filters.html FFmpeg's video filter list] is a good place to start, but here are a few things you should know.


Some programs need video to have a specified aspect ratio. If you simply crop out the ugly overscan lines at the bottom of your video, some programs may refuse to play your video. Instead you should ''mask'' the area with blackness. In <code>ffmpeg</code>, you would use a <code>crop</code> filter to remove the overscan followed by a <code>pad</code> filter to put the image back to its original height.
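As a sketch, assuming a 720x576 PAL frame with (say) 8 junk lines at the bottom, the pair of filters might look like this - the geometry is hypothetical, so find your own numbers by eye:

```shell
# Keep the top 568 of 576 lines, then pad back to 576 with black so the
# frame size (and therefore the aspect ratio) is unchanged.
# crop and pad both take w:h:x:y, and pad accepts a colour as well.
MASK_FILTER="crop=720:568:0:0,pad=720:576:0:0:black"
echo "ffmpeg -i in.mkv -vf '$MASK_FILTER' out.mkv"
```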


Analogue video is [https://en.wikipedia.org/wiki/Interlaced_video interlaced], essentially interleaving two consecutive video frames within each image. This confuses video filters that compare neighbouring pixels (e.g. to look for bright grains in dark areas of the screen), so you should ''deinterleave'' the frames before using such filters, then ''interleave'' them again afterwards. For example, an <code>ffmpeg</code> filter chain might start with <code>il=d:d:d</code> and end with <code>il=i:i:i</code>. If you skip the trailing <code>il=i:i:i</code>, you can see that de-interleaving works by putting each image in a different half of the frame to trick other filters into doing the right thing.
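For instance, a denoiser sandwiched between the two il filters might look like this (hqdn3d is just one example filter, and its strength here is a guess):

```shell
# Deinterleave, denoise each half-frame separately, then reinterleave
DENOISE_FILTER="il=d:d:d,hqdn3d=4,il=i:i:i"
echo "ffmpeg -i in.mkv -vf '$DENOISE_FILTER' out.mkv"
```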


Your transcoding format needs to be small and compatible with whatever software you will use to play it back. If you can't find accurate information about your players, create a short test video and try it on your system. Your video codec may well have options to reduce file size at the cost of encoding time, so you may want to leave your computer transcoding overnight to get the best file size.

[[Category:Software]]

Revision as of 22:40, 4 September 2015

This page discusses how to capture analogue video for offline consumption (especially digitising old VHS tapes). For information about streaming live video (e.g. webcams), see the streaming page. For information about digital video (DVB), see TV-related software.

Overview

Analogue video technology was largely designed before the advent of computers, so accurately digitising a video is a difficult problem. For example, software often assumes a constant frame rate throughout a video, but analogue technologies can deliver different numbers of frames from second to second. This page will present a framework for recording video, which you can alter for your specific requirements.

Recommended process

Your workflow should look something like this:

  1. Set your system up - understand the quirks of your TV card, VCR etc.
  2. Encode an accurate copy of the source video - handle issues with the analogue half of the system here. Do as little digital processing as possible
  3. Transcode a usable copy of the video - convert the previous file to something pleasing to use
  4. Try the video and transcode again - check whether the video works how you want, then transcode again

Converting analogue input to a digital format is hard - VCRs overheat and damage tapes, computers use too much CPU and drop frames, disk drives fill up, etc. Creating a good digital video is also hard - not all software supports all formats, overscan and background hiss distract the viewer, videos need to be split into useful chunks, and so on. It's much easier to learn the process and produce a quality result if you tackle encoding in one step and transcoding in another.

Suggested software

This page assumes you have installed the following programs:

  • gst-launch-1.0 for capturing video (probably part of the gstreamer1.0-tools package)
  • FFmpeg for saving and editing video (probably part of the ffmpeg package)
  • v4l2-ctl for controlling your video card (probably part of the v4l-utils package)
  • mpv for viewing videos (probably part of the mpv package)

There are alternatives for each of these (e.g. the older 0.10 series of GStreamer and the libav fork of FFmpeg). You should be able to modify the instructions below to suit your preferences.
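A quick way to check that the four programs are installed (a sketch - package and binary names can vary between distributions):

```shell
# Report any of the suggested tools that are not on $PATH
MISSING=""
for tool in gst-launch-1.0 ffmpeg v4l2-ctl mpv; do
    command -v "$tool" >/dev/null 2>&1 || MISSING="$MISSING $tool"
done
if [ -n "$MISSING" ]; then
    echo "missing:$MISSING"
else
    echo "all tools found"
fi
```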

Choosing formats

When you create a video, you need to choose your video format (e.g. XviD or MPEG-2), audio format (e.g. WAV or MP3) and container format (e.g. AVI or MP4). There's constant work to improve the codecs that create audio/video and the muxers that create containers, and whole new formats are invented fairly regularly, so this page can't recommend any specific formats. For example, as of late 2015 MPEG-2 is the most widely supported by older DVD players, H.264 is becoming a de facto standard in modern web browsers, and people are waiting to see whether HEVC will be blocked by patent trolls. That's probably enough to decide which video codec is right for you in late 2015, but the facts will have changed even by early 2016.

You'll need to do some research to find the currently-recommended formats. Wikipedia's comparisons of audio, video and container formats are a good place to start. Here are some important things to look for:

  • encoding speed - during the encoding stage, using too much CPU load will cause frame-drops as the computer tries to keep up
  • accuracy - some formats are lossless, others throw away information to improve speed and/or reduce file size
  • file size - different formats use different amounts of disk space, even with the same accuracy
  • compatibility - newer formats usually produce better results but can't be played by older software

Remember that you can use different formats in the encode and transcode stages. Speed and accuracy are most important when encoding, so you should use a modern, fast, low-loss format to create your initial accurate copy of the source video. But size and compatibility are most important for playback, so you should transcode to a format that produces a smaller or more compatible file. For example, as of late 2015 you might encode FLAC audio and x264 video into a Matroska file, then transcode MP3 audio and MPEG-2 video into an AVI file. You can examine the result and transcode again from the original if the file is too large or your grandmother's DVD player won't play it.

Setting up

Before you can record a video, you need to set your system up and identify the following information:

  • connector type (RF, composite or S-video)
  • TV norm (some variant of PAL, NTSC or SECAM)
  • video device (/dev/video<number>)
  • audio device (hw:CARD=<id>,DEV=<number>)
  • video capabilities (video/x-raw, format=UYVY, framerate=<fraction>, width=<int>, height=<int>)
  • audio capabilities (audio/x-raw, rate=<int>, channels=<int>)
  • colour settings (optional - hue, saturation, brightness and contrast)

This section will explain how to find these.

Connecting your video

  • RF connector - avoid: tends to create more noise than the alternatives. Usually input #0; shows snow when there's no input
  • Composite video connector - use: widely supported and produces a good signal. Usually input #1; shows blackness when there's no input
  • S-video connector - use if available: should produce a good video signal but most hardware needs a converter. Usually input #2; shows blackness when there's no input

Connect your video source (TV or VCR) to your computer however you can. Each type of connector has slightly different properties - try whatever you can and see what works. If you have a TV card that supports multiple inputs, you will need to specify the input number when you come to record. You can cut the recording into pieces during the transcoding stage, so snow/blackness won't appear in the final video.

Finding your TV norm

Most TV cards only support the TV norm of the country they were sold in (e.g. PAL-I in the UK or NTSC-M in the Americas), but it's best to confirm this just in case. Wikipedia has an image of colour systems by country and a complete list of standards with countries they're used in.

If you like, you can store your TV norm in an environment variable:

TV_NORM=<norm>

For example, if your norm was PAL-I, you might type TV_NORM=PAL-I into your terminal. This guide will use $TV_NORM to refer to your video norm - if you choose not to set an environment variable, you will need to replace instances of $TV_NORM with your TV norm.

Determining your video device

Once you have connected your input, you need to determine the name Linux gives it. See all your video devices by doing:

ls /dev/video*

One of these is the device you want. Most people only have one, or can figure it out by disconnecting devices and rerunning the above command. Otherwise, check the capabilities of each device:

for VIDEO_DEVICE in /dev/video* ; do echo ; echo ; echo $VIDEO_DEVICE ; echo ; v4l2-ctl --device=$VIDEO_DEVICE --list-inputs ; done

Usually you will see e.g. a webcam with a single input and a TV card with multiple inputs. If you're still not sure which one you want, try each one in turn:

mpv --tv-device=<device> tv:///<whichever-input-number-you-connected>

If your source is a VCR, remember to play a video so you know the right one when you see it. If you see snow when you were expecting blackness (or vice versa), double-check your input number with the output of v4l2-ctl above.

If you like, you can store your device and input number in environment variables:

VIDEO_DEVICE=<device>
VIDEO_INPUT=<whichever-input-number-you-connected>

Further examples on this page will use $VIDEO_DEVICE and $VIDEO_INPUT - you will need to replace these if you don't set environment variables.
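For instance (the values here are hypothetical - substitute whatever the commands above actually showed for your card):

```shell
# Hypothetical values - substitute what the earlier commands showed for you
VIDEO_DEVICE=/dev/video0
VIDEO_INPUT=1   # composite is often input #1, as noted above
echo "mpv --tv-device=$VIDEO_DEVICE tv:///$VIDEO_INPUT"
```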

Determining your audio device

See all of your audio devices by doing:

arecord -l

Again, it should be fairly obvious which of these is the right one. Get the device names by doing:

arecord -L | grep ^hw:

If you're not sure which one you want, try each in turn:

mpv --tv-device=$VIDEO_DEVICE --tv-adevice=<device> tv:///$VIDEO_INPUT

Again, you should hear your tape playing when you get the right one. Note: always use an ALSA hw device, as they are closest to the hardware. PulseAudio devices and ALSA's plughw devices add extra layers that, while more convenient for most uses, only cause headaches for us.

Optionally set your device in an environment variable:

AUDIO_DEVICE=<device>

Further examples on this page will use $AUDIO_DEVICE in place of an actual audio device - you will need to replace this if you don't set environment variables.
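For instance, a saa7134-based TV card might show up like this (the card name is hypothetical - use whatever arecord printed for you):

```shell
# Hypothetical ALSA device name for a TV card - yours will differ
AUDIO_DEVICE="hw:CARD=SAA7134,DEV=0"
echo "$AUDIO_DEVICE"
```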

Getting your device capabilities

To find the capabilities of your video device, do:

gst-launch-1.0 --gst-debug=v4l2src:5 v4l2src device=$VIDEO_DEVICE ! fakesink 2>&1 | sed -une '/caps of src/ s/[:;] /\n/gp'

To find the capabilities of your audio device, do:

gst-launch-1.0 --gst-debug=alsa:5 alsasrc device=$AUDIO_DEVICE ! fakesink 2>&1 | sed -une '/returning caps/ s/[s;] /\n/gp'

You will need to press ctrl+c to close each of these programs when they've printed some output. When you record your video, you will need to specify capabilities based on the ranges displayed here. Some things to remember:

  • audio format is optional (your software can decide this automatically)
  • video format should be optional, but as of 2015 a bug means you should specify format=UYVY
  • video height (discussed below) should be the appropriate height for your TV norm
  • video framerate (discussed below) should be the appropriate value for your TV norm, but may need to be tweaked for your hardware
  • pixel-aspect-ratio will be set below - do not specify it here
  • for all other capabilities, just pick the highest number (or delete it altogether if there's only one choice)

For example, if your TV norm was some variant of PAL and your video card showed these results:

$ gst-launch-1.0 --gst-debug=v4l2src:5 v4l2src device=$VIDEO_DEVICE ! fakesink 2>&1 | sed -une '/caps of src/ s/[:;] /\n/gp'
0:00:00.052071821 29657      0x139fc50 DEBUG                v4l2src gstv4l2src.c:306:gst_v4l2src_negotiate:<v4l2src0> caps of src
video/x-raw, format=(string)YUY2, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)UYVY, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)Y42B, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)I420, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)YV12, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)xRGB, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)BGRx, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)RGB, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)BGR, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)RGB16, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)RGB15, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
video/x-raw, format=(string)GRAY8, framerate=(fraction)25/1, width=(int)[ 48, 720 ], height=(int)[ 32, 578 ], interlace-mode=(string)mixed, pixel-aspect-ratio=(fraction)54/59
$ gst-launch-1.0 --gst-debug=alsa:5 alsasrc device=$AUDIO_DEVICE ! fakesink 2>&1 | sed -une '/returning caps/  s/[s;] /\n/gp'
0:00:00.039231863 30898      0x25fcde0 INFO                    alsa gstalsasrc.c:318:gst_alsasrc_getcaps:<alsasrc0> returning cap
audio/x-raw, format=(string){ S16LE, U16LE }, layout=(string)interleaved, rate=(int)32000, channels=(int)2, channel-mask=(bitmask)0x0000000000000003
audio/x-raw, format=(string){ S16LE, U16LE }, layout=(string)interleaved, rate=(int)32000, channels=(int)1

Then you would select video/x-raw, format=UYVY, framerate=25/1, width=720, height=576 and audio/x-raw, rate=32000, channels=2

Once again, you can set your capabilities in an environment variable, but you will need to put quote marks around them:

VIDEO_CAPABILITIES="<capabilities>"
AUDIO_CAPABILITIES="<capabilities>"

For example, AUDIO_CAPABILITIES="audio/x-raw, rate=32000, channels=2". Further examples on this page will use $VIDEO_CAPABILITIES and $AUDIO_CAPABILITIES in place of actual capabilities - you will need to replace these if you don't set environment variables.
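Continuing the PAL example above, the two variables would be:

```shell
# Values for the example PAL card above - substitute your own device's ranges
VIDEO_CAPABILITIES="video/x-raw, format=UYVY, framerate=25/1, width=720, height=576"
AUDIO_CAPABILITIES="audio/x-raw, rate=32000, channels=2"
```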

Video heights

Some devices report a maximum height of 578. A PAL TV signal is 576 lines tall and an NTSC signal is 486 lines, so height=578 won't give you the best picture quality. To confirm this, tune to a non-existent TV channel then take a screenshot of the snow:

gst-launch-1.0 -q v4l2src device=$VIDEO_DEVICE \
    ! $VIDEO_CAPABILITIES, height=578 \
    ! imagefreeze \
    ! autovideosink

Here's an example of what you might see - notice the blurring in the middle of the picture. Now take a screenshot with the appropriate height for your TV norm:

gst-launch-1.0 -q v4l2src device=$VIDEO_DEVICE \
    ! $VIDEO_CAPABILITIES, height=<appropriate-height> \
    ! imagefreeze \
    ! autovideosink

Here's an example taken with height=576 - notice the middle of this picture is nice and crisp.

You may want to test this yourself and set your height to whatever looks best.

Video framerates

Due to hardware issues, some V4L devices produce slightly too many (or too few) frames per second. To check your system's actual frame rate, start your video source (e.g. a VCR or webcam) then run this command:

gst-launch-1.0 v4l2src device=$VIDEO_DEVICE \
    ! $VIDEO_CAPABILITIES \
    ! fpsdisplaysink fps-update-interval=100000
  1. Let it run for 100 seconds to get a large enough sample. It should print some statistics at the bottom of the window - write down the number of frames dropped
  2. Let it run for another 100 seconds, then write down the new number of frames dropped
  3. Calculate (second number) - (first number) - 1 (e.g. 5007 - 2504 - 1 == 2502)
    • You need to subtract one because fpsdisplaysink drops one frame every time it displays the counter
  4. That number is exactly one hundred times your framerate, so you should tell your software e.g. framerate=2502/100
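The arithmetic in the steps above can be sketched as follows, using the hypothetical dropped-frame counts from the example:

```shell
# Dropped-frame counts read from fpsdisplaysink after the first and second
# 100-second interval (hypothetical values from the example above)
FIRST_COUNT=2504
SECOND_COUNT=5007

# Subtract one because fpsdisplaysink drops a frame each time it redraws the counter
FRAMES_PER_100S=$(( SECOND_COUNT - FIRST_COUNT - 1 ))

echo "framerate=$FRAMES_PER_100S/100"
```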

Note: VHS framerates can vary within the same file. To get an accurate measure of a VHS recording's framerate, encode to a format that supports variable framerates then retrieve the video's duration and total number of frames. You can then transcode a new file with your desired frame rate.
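For example, once you know the total frame count and duration (ffprobe's -count_frames option can report both - the numbers below are made up for illustration), the average framerate is a simple division:

```shell
# Hypothetical totals for a captured tape: 4510 frames over 180.25 seconds
TOTAL_FRAMES=4510
DURATION_SECONDS=180.25

# Average framerate = total frames / duration in seconds
awk -v f="$TOTAL_FRAMES" -v d="$DURATION_SECONDS" \
    'BEGIN { printf "average framerate: %.4f fps\n", f / d }'
```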

Correcting your colour settings

Most TV cards have correct colour settings by default, but if your picture looks wrong (or you just want to check), first capture an image that has a good range of colours:

mpv --tv-device=$VIDEO_DEVICE tv:///$VIDEO_INPUT

Press "s" to take screenshots, then open them in an image editor and alter the hue, saturation, brightness and contrast until it looks right. If possible, print a testcard, capture a screenshot of it in good lighting conditions, then compare the captured image to the original. When you find the settings that look right, you can apply them to your TV card.

First, make a backup of the current settings:

v4l2-ctl --device=$VIDEO_DEVICE --list-ctrls | tee tv-card-settings-$( date --iso-8601=seconds ).txt

Then input the new settings:

v4l2-ctl --device=$VIDEO_DEVICE --set-ctrl=hue=0          # set this to your preferred value
v4l2-ctl --device=$VIDEO_DEVICE --set-ctrl=saturation=64  # set this to your preferred value
v4l2-ctl --device=$VIDEO_DEVICE --set-ctrl=brightness=128 # set this to your preferred value
v4l2-ctl --device=$VIDEO_DEVICE --set-ctrl=contrast=68    # set this to your preferred value

Note: you can update these while a video is playing. If your settings are too far off, or if you're able to record a testcard, you might want to change the settings by eye before you bother with screenshots.
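If you ever need to roll back, a small script can turn that backup into --set-ctrl calls. This is a sketch, not from the original article: it prints the commands rather than running them (drop the echo to apply them), and it assumes the backup keeps v4l2-ctl's usual "name ... value=N" layout:

```shell
# Print a v4l2-ctl command for every control in a --list-ctrls backup.
# Lines without a "value=" field (e.g. section headers) are skipped.
restore_controls() {
    while read -r line; do
        name=${line%% *}
        value=$(printf '%s\n' "$line" | grep -o 'value=-*[0-9]*' | head -n1 | cut -d= -f2)
        [ -n "$value" ] && echo v4l2-ctl --device="$VIDEO_DEVICE" --set-ctrl="$name=$value"
    done
}
```

Run it as restore_controls < your-backup-file.txt (substituting the filename your backup was saved under) and check the printed commands before removing the echo.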

Encoding an accurate video

Your first step should be to record an accurate copy of your source video. A good quality encoding can use anything up to 30 gigabytes per hour, so figure out how long your video is and make sure you have enough space. Most software isn't optimised for analogue video encoding, causing audio and video to desynchronise in some circumstances.

As well as the values above, you will need to decide the following (preferably storing them as environment variables):

  • ENCODING_VIDEO_FORMAT - the format you chose to encode the accurate copy of your video (see ffmpeg -encoders for a list)
  • ENCODING_AUDIO_FORMAT - the format you chose to encode the accurate copy of your audio (see ffmpeg -encoders for a list)
  • ENCODING_MUXER_FORMAT - the format you chose to mux your videos together (see ffmpeg -formats for a list)
  • ENCODING_VIDEO_OPTIONS - settings for your video format (see ffmpeg --help encoder=$ENCODING_VIDEO_FORMAT for a list)
  • ENCODING_AUDIO_OPTIONS - settings for your audio format (see ffmpeg --help encoder=$ENCODING_AUDIO_FORMAT for a list)
  • ENCODING_MUXER_OPTIONS - settings for your muxer format (see ffmpeg --help muxer=$ENCODING_MUXER_FORMAT for a list)
  • ENCODING_FILENAME - your preferred filename (see ffmpeg --help muxer=$ENCODING_MUXER_FORMAT for suggested extensions)

GStreamer is the best program for capturing video on Linux (see GStreamer for details), but is poorly documented and hard to use. FFmpeg has much better documentation, but can't handle the quirks of analogue video. To get the best results, you'll need to use GStreamer as an FFmpeg source:

ffmpeg \
    -i <(
        gst-launch-1.0 -q \
            v4l2src device=$VIDEO_DEVICE do-timestamp=true pixel-aspect-ratio=1 norm=$TV_NORM ! $VIDEO_CAPABILITIES ! mux. \
            alsasrc device=$AUDIO_DEVICE do-timestamp=true                                    ! $AUDIO_CAPABILITIES ! mux. \
            matroskamux name=mux ! fdsink fd=1
    ) \
    -c:v $ENCODING_VIDEO_FORMAT $ENCODING_VIDEO_OPTIONS \
    -c:a $ENCODING_AUDIO_FORMAT $ENCODING_AUDIO_OPTIONS \
    -f   $ENCODING_MUXER_FORMAT $ENCODING_MUXER_OPTIONS \
    "$ENCODING_FILENAME"

This command does two things:

  • tells GStreamer to record raw audio and video, using a Matroska media container to communicate with FFmpeg
  • tells FFmpeg to accept the video from GStreamer and encode it using the settings you specified (e.g. remuxing to your preferred container format)

If you have enough free disk space, you could just save the raw video as your accurate copy - see GStreamer for details.

Handling desynchronised audio and video

Most people can skip this step - GStreamer should be able to synchronise your audio and video automatically using the do-timestamp setting. If your audio and video aren't synchronised (most noticeable when people's mouths don't quite move in time to their words), first check you're using a raw hw audio device, as plughw devices can cause synchronisation issues. If the problem still occurs with a raw hw audio device, your hardware may not support timestamps so you'll have to fix it during transcoding.

If you need to resync audio and video during transcoding, you can make your life easier by creating clapperboard effects at the start of your videos - hook up a camcorder, run your capture command, then clap your hands in front of the camera before pressing play on your VCR. Failing that, make note of any moments where an obvious visual element occurred at the same moment as an obvious audio element (such as a cup being placed on a table).

Once you've recorded your video, you'll need to calculate your desired A/V offset. For the best result, play your video with precise timestamps (e.g. mpv --osd-fractions "$ENCODING_FILENAME") and open your audio in an audio editor (e.g. Audacity), then find the exact frame when your clapperboard video/audio occurred and subtract one from the other. To confirm your result, run mpv --audio-delay=<result> "$ENCODING_FILENAME" and make sure it looks right.
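As a sketch of the subtraction (the timestamps are hypothetical): if the clap is visible at 12.48 seconds but audible at 12.30 seconds, the audio runs 0.18 seconds early:

```shell
# Hypothetical clapperboard timestamps: the clap is visible at 12.48s in mpv
# but audible at 12.30s in your audio editor, i.e. the audio runs 0.18s early
VIDEO_TIMESTAMP=12.48
AUDIO_TIMESTAMP=12.30

# Delaying the early audio by the difference should line them up - confirm
# the sign by playing back with mpv --audio-delay before committing to it
awk -v v="$VIDEO_TIMESTAMP" -v a="$AUDIO_TIMESTAMP" \
    'BEGIN { printf "--audio-delay=%.2f\n", v - a }'
```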

Measuring audio noise

Your hardware will create a small amount of audio noise in your recording. If you want to remove this later, you'll need to measure it for every hardware configuration you use - S-video vs. composite, laptop charging vs. unplugged, and so on.

You'll need a recording of about half a second of your system in a resting state, which you will use later to remove noise. This can be a silent TV channel or paused tape, but if you're using composite or S-video connectors, the easiest thing is probably just to record a few moments of blackness before pressing play.

Choosing formats

Your encoding formats need to encode in real-time and lose as little information as possible. Even if you plan to throw that information away during transcoding, an accurate initial recording will give you more freedom when the time comes. For example, your muxer format should support variable frame rates so you can measure your video's frame rate. Once you have that information, you could use it to calculate an accurate transcoding frame rate or to cut out sections where your VCR delivered the wrong number of frames - either way the information is useful even though it was lost from the final video.
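As a sketch of one set of choices that meets these requirements (an example, not a recommendation): FFV1 is a lossless video codec that encodes in real time on most modern machines, FLAC is lossless audio, and Matroska supports variable frame rates:

```shell
# One possible set of encoding choices - all names are real ffmpeg
# encoder/muxer names, but pick whatever suits your own hardware
ENCODING_VIDEO_FORMAT=ffv1
ENCODING_VIDEO_OPTIONS=""
ENCODING_AUDIO_FORMAT=flac
ENCODING_AUDIO_OPTIONS=""
ENCODING_MUXER_FORMAT=matroska
ENCODING_MUXER_OPTIONS=""
ENCODING_FILENAME="capture-$( date --iso-8601=seconds ).mkv"
```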

Transcoding a usable video

The video you recorded should accurately represent your source video, but will probably be a large file, full of audio and video noise, and might not even play in some programs. You need to transcode it to a more usable format. You can use any program(s) to do this, but it's probably easiest to continue using FFmpeg:

ffmpeg -i "$ENCODING_FILENAME" \
    -c:v <transcoded-video-format> <transcoded-video-options> \
    -c:a <transcoded-audio-format> <transcoded-audio-options> \
    -f   <transcoded-muxer-format> <transcoded-muxer-options> \
    <transcoded-filename>

If you're happy with the result, you can stop here. But you might want to improve the video - the rest of this page discusses some of the high-level issues you'll face if you choose to clean up your audio and video.

Cleaning audio

Any analogue recording will contain a certain amount of background noise. Cleaning noise is optional, and you'll always be able to produce a slightly better result if you spend a little longer on it, so this section will just introduce enough theory to get you started. Audacity's equalizer and noise reduction effect are good places to start experimenting.

The major noise sources are:

  • your audio codec might throw away sound it thinks you won't hear in order to reduce file size
  • your recording system will produce a small, consistent amount of noise based on its various electrical and mechanical components
  • VHS format limitations cause static at high and low frequencies, depending on the VCR's settings
  • imperfections in tape recording and playback produce noise that differs between recordings and even between scenes

A lossless audio format (e.g. WAV or FLAC) should ensure your original encoding doesn't produce any extra noise. Even if you transcode to a format like MP3 that throws information away, a lossless original ensures there's only one generation of noise in the result.

The primary means of reducing noise is the frequency-based noise gate, which blocks some frequencies and passes others. High-pass filters pass sound above a certain frequency, low-pass filters pass sound below one, and the two can be combined into band-pass or even multi-band filters. The rest of this section discusses how to build a series of noise gates for your audio.

Identify noise from your recording system by recording the sound of a paused tape or silent television channel for a few seconds. If possible, use the near-silence at the start of your recording so you can guarantee your sample matches your current hardware configuration. Use this baseline recording as a noise profile which your software uses to build a multi-band noise gate. You can apply that noise gate to the whole recording, and to other recordings with the same hardware that don't have a usable sample.

Identify VHS format limitations by searching online for information based on your TV norm (NTSC, PAL or SECAM), your recording quality (normal or Hi-Fi) and your VHS play mode (short- or long-play). Wikipedia's discussion of VHS audio recording is a good place to start. If you're able to find the information, gate your recordings with high-pass and low-pass filters that only allow frequencies within the range your tape actually records. For example, a long-play recording of a PAL tape will produce static below 100Hz and above 4kHz so you should gate your recording to only pass audio in the 100Hz-4000Hz range. If you can't find the information, you can determine it experimentally by trying out different filters to see what sounds right - your system probably produces static below about 10Hz or 100Hz and above about 4kHz or 12kHz, so try high- and low-pass filters in those ranges until you stop hearing background noise. If you don't remove this noise source, the next step will do a reasonable job of guessing it for you anyway.

Identify imperfections in recording and playback by watching the video and looking for periods of silence. You only need half a second of background noise to generate a profile, but the number of profiles is up to you. Some people grab one profile for a whole recording, others combine clips into averaged noise profiles, others cut audio into scenes and de-noise each in turn. At a minimum, tapes with multiple recordings should be split up and each one de-noised separately - a tape containing a TV program recorded in LP mode in one VCR followed by a home video recorded in SP in another VCR will produce two very different noise profiles, even if played back all in one go.

It's good to apply filters in the right order (system profile, then VHS limits, then recording profiles), but beyond that noise reduction is very subjective. For example, intelligent noise reduction tends to remove more noise in quiet periods but less when it would risk losing signal, which can sound like a snare drum being brushed whenever someone speaks. But dumb filters silence the same frequencies at all times, which can make everything sound muffled.

You can run your audio through as many gates as you like, and even repeat the same filter several times. If you use a noise reduction profile, you can even get different results from different programs (see for example this comparison of sox and Audacity's algorithms). There's no right answer but there's always a better result if you spend a bit more time, so you'll need to decide for yourself when the result is good enough.

Cleaning video

Much like audio, you can spend as long as you like cleaning your video. But whereas audio cleaning tends to be about doing one thing really well (separating out frequencies of signal and noise), video cleaning tends to be about getting decent results in different circumstances. For example, you might want to just remove the overscan lines at the bottom of a VHS recording, denoise a video slightly to reduce file size, or aggressively remove grain to make a low-quality recording watchable. FFmpeg's video filter list is a good place to start, but here are a few things you should know.

Some programs need video to have a specified aspect ratio. If you simply crop out the ugly overscan lines at the bottom of your video, some programs may refuse to play your video. Instead you should mask the area with blackness. In ffmpeg, you would use a crop filter to remove the overscan followed by a pad filter to put the image back to its original height.

Analogue video is interlaced, essentially interleaving two half-height fields, captured a fraction of a second apart, within each frame. This confuses video filters that compare neighbouring pixels (e.g. to look for bright grains in dark areas of the screen), so you should deinterleave the frames before using such filters, then interleave them again afterwards. For example, an ffmpeg filter chain might start with il=d:d:d and end with il=i:i:i. If you skip the trailing il=i:i:i, you can see that de-interleaving works by putting each field in a different half of the frame to trick other filters into doing the right thing.

Choosing formats

Your transcoding format needs to be small and compatible with whatever software you will use to play it back. If you can't find accurate information about your players, create a short test video and try it on your system. Your video codec may well have options to reduce file size at the cost of encoding time, so you may want to leave your computer transcoding overnight to get the best file size.
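As one hedged example of filling in the placeholders above (H.264 and AAC in MP4 play almost everywhere; the -preset and -crf values trade encoding time against file size and are only a starting point):

```shell
# One widely-compatible set of transcoding choices - an example, not a
# recommendation; test a short clip on your own players first
TRANSCODED_VIDEO_FORMAT=libx264
TRANSCODED_VIDEO_OPTIONS="-preset veryslow -crf 20"
TRANSCODED_AUDIO_FORMAT=aac
TRANSCODED_AUDIO_OPTIONS="-b:a 128k"
TRANSCODED_MUXER_FORMAT=mp4
TRANSCODED_FILENAME="transcoded.mp4"
```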