--- On Tue, 18/1/11, Niko Mikkilä <nm@phnet.fi> wrote:

From: Niko Mikkilä <nm@phnet.fi>
Subject: Re: [vdr] Replacing aging VDR for DVB-S2
To: "VDR Mailing List" <vdr@linuxtv.org>
Date: Tuesday, 18 January, 2011, 13:06

On 2011-01-15 22:36 +0000, Tony Houghton wrote:
I wonder whether it might be possible to use a more economical card which is only powerful enough to decode 1080i without deinterlacing it, and take advantage of the abundant CPU power most people have nowadays to perform software deinterlacing. It may not be possible to have something as sophisticated as NVidia's temporal + spatial, but some of the existing software filters should scale up to HD without overloading the CPU, seeing as it wouldn't be doing the decoding too.
It's possible, but realtime GPU deinterlacing is more energy-efficient:
- For CPU deinterlacing you'd need something like Greedy2Frame or TomsMoComp. They should give about the same quality as Nvidia's temporal deinterlacer, but the code would need to be threaded to support lower-frequency multicore CPUs. Yadif almost matches temporal+spatial in quality, but it is also about 50% slower than Greedy2Frame.
- Hardware-decoded video is already in GPU memory, and moving 1920x1080-pixel frames around is not free.
- Simple motion-adaptive, edge-interpolating deinterlacing can be easily parallelized for GPU architectures, so it will be more efficient than on a serial processor (a rough sketch follows this list). For example, a GT 220 can do 1080i deinterlacing at more than 150 fps (output), so normal 50 fps deinterlacing causes only partial load and power consumption. The GT 430 is currently worse because of an unoptimized filter implementation: http://nvnews.net/vbulletin/showthread.php?p=2377750#post2377750

Still, only the latest CPU generation can reach that kind of performance with a highly optimized software deinterlacer.
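
To make that concrete, here is a rough motion-adaptive deinterlacer sketch (my own illustration, not code from any of the filters mentioned): 8-bit luma only, single-threaded, with an arbitrary motion threshold. Every output pixel is computed independently, which is why the same idea maps well onto one GPU thread per pixel or onto per-slice CPU threads.

/* Rough motion-adaptive deinterlacer sketch: 8-bit luma, single thread,
 * arbitrary motion threshold. "cur" is a full-height frame in which only
 * the current field's lines are valid; "prev" is the previously
 * reconstructed frame. */
#include <stdint.h>
#include <stdlib.h>

void deinterlace_field(const uint8_t *cur, const uint8_t *prev,
                       uint8_t *out, int w, int h, int top_field)
{
    const int T = 10;  /* motion threshold, tune to taste */
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            if ((y & 1) == !top_field) {
                /* Line belongs to the current field: copy it through. */
                out[y * w + x] = cur[y * w + x];
            } else {
                /* Missing line: weave from the previous frame if the area
                 * is static, otherwise interpolate from the lines above
                 * and below in the current field. */
                int up     = cur[(y > 0     ? y - 1 : y + 1) * w + x];
                int down   = cur[(y < h - 1 ? y + 1 : y - 1) * w + x];
                int old    = prev[y * w + x];
                int moving = abs(up - old) > T || abs(down - old) > T;
                out[y * w + x] = moving ? (uint8_t)((up + down + 1) / 2)
                                        : (uint8_t)old;
            }
        }
    }
}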
Alternatively, use software decoding and hardware deinterlacing.
GPU video decoding is very efficient thanks to dedicated hardware. I'd guess that current chips only use about 3 Watts for high-bitrate 1080i25.
Also, decoding and filtering aren't executed on the same parts of the GPU chip. They are almost perfectly parallel processes, so combined throughput will be that of the slower process.
Somewhere on linuxtv.org there's an article about using fairly simple OpenGL to mimic what happens to interlaced video on a CRT, but I don't know how good the results would look.
Sounds like normal bobbing with interpolation. Even if it simulates a phosphor delay, it probably won't look much better than MPlayer's -vf tfields or the bobber in VDPAU.
Sharp interlaced (and progressive) video is quite flickery on a CRT too.
BTW, speaking of temporal and spatial deinterlacing: AFAICT one means combining fields to provide maximum resolution with half the frame rate of the interlaced fields, and the other maximises the frame rate while discarding resolution; but which is which? And does NVidia's temporal spatial try to give the best of both worlds through some sort of interpolation?
Temporal = motion-adaptive deinterlacing at either half or full field rate; some programs refer to the latter as "2x". "Motion adaptive" means that the filter detects the interlaced parts of each frame and adjusts deinterlacing accordingly, which gives better quality in stationary areas.
Temporal-spatial = temporal with edge-directed interpolation to smooth the jagged edges of moving objects.
Both methods give about the same spatial and temporal resolution, but temporal-spatial will look nicer.
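
Just to illustrate the "spatial" part: it is roughly plain edge-directed (ELA-style) interpolation. This is a sketch of the idea, not Nvidia's actual filter: for each pixel on a missing line, compare pixel pairs along a few candidate directions in the lines above and below, then interpolate along the direction with the smallest difference.

/* ELA-style edge-directed interpolation of one missing luma line from the
 * valid lines directly above and below it (8-bit samples). A real
 * temporal-spatial filter blends this with the weave/motion decision
 * shown earlier. */
#include <stdint.h>
#include <stdlib.h>

void ela_line(const uint8_t *above, const uint8_t *below, uint8_t *out, int w)
{
    for (int x = 1; x < w - 1; x++) {
        int best_diff = 256;
        int best_val  = (above[x] + below[x] + 1) / 2;
        for (int d = -1; d <= 1; d++) {      /* directions: \ , | , / */
            int diff = abs(above[x + d] - below[x - d]);
            if (diff < best_diff) {
                best_diff = diff;
                best_val  = (above[x + d] + below[x - d] + 1) / 2;
            }
        }
        out[x] = (uint8_t)best_val;
    }
    /* Borders: plain vertical average. */
    out[0]     = (uint8_t)((above[0] + below[0] + 1) / 2);
    out[w - 1] = (uint8_t)((above[w - 1] + below[w - 1] + 1) / 2);
}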
--Niko
My experience with an nVidia GT220 has been less than perfect. It can perform temporal+spatial+inverse_telecine on HD video fast enough, but my PC gets hot and it truly sucks at 2:2 pulldown detection. The result is that, when viewing progressive video encoded as interlaced field pairs (2:2 pulldown), deinterlacing keeps cutting in and out every second or so, ruining the picture quality.
IMHO the best way to go for a low-power HTPC is to decode in hardware (e.g. VDPAU or VAAPI), but output interlaced video to your TV and let the TV sort out deinterlacing and inverse telecine.
I have experimented with FFmpeg and OpenGL and achieved a very good quality picture on a 1080i CRT monitor (I have yet to try an HDMI flat-panel TV).
These are the key requirements to achieve interlaced output (a field-copy sketch follows the list):

- Get the right modelines for your video card and TV.
- Draw interlaced fields to your frame buffer at field rate and in the correct order (top field first or bottom field first).
- When drawing a field to the frame buffer, do not overwrite the previous field still in the frame buffer.
- Maintain 1:1 vertical scaling (no vertical scaling), so you will need to switch video output to match the source video height (480i, 576i or 1080i).
- Display the frame buffer at field rate, synchronised to the graphics card's vertical sync.
- Finally, there is NO requirement to synchronise fields: fields are always displayed in the same order they are written to the frame buffer, even if fields are occasionally dropped.
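
To give an idea of the "don't overwrite the other field" point, here is a minimal sketch. It assumes a full-height frame kept in system memory that is re-uploaded (for example with glTexSubImage2D) and presented on every vertical retrace; the function name and pixel layout are just illustrative, not the exact code I used.

/* Copy one decoded field into a persistent full-height frame, touching only
 * the rows that belong to that field. The other field's rows keep whatever
 * the previous field wrote there, so successive fields interleave on screen
 * the way they would on a CRT. Packed pixels, "bpp" bytes per pixel. */
#include <stdint.h>
#include <string.h>

void blit_field(uint8_t *frame, const uint8_t *field,
                int width, int field_height, int bpp, int top_field)
{
    size_t stride = (size_t)width * bpp;
    for (int y = 0; y < field_height; y++) {
        /* Top fields land on even frame rows, bottom fields on odd rows. */
        int dst_row = 2 * y + (top_field ? 0 : 1);
        memcpy(frame + (size_t)dst_row * stride,
               field + (size_t)y * stride, stride);
    }
    /* Then upload the frame at 1:1 vertical scale (e.g. glTexSubImage2D)
     * and swap buffers on the next vertical sync. */
}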
This can be applied to both interlaced and progressive video, so you don't have to switch between interlaced and progressive output modes; you just need to make sure you perform colour-space conversion on separated fields for interlaced material and on whole frames for progressive material.
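
For the interlaced case, the per-field conversion can be done without copying by offsetting the plane pointers and doubling the strides, so the converter only ever interpolates chroma within one field. A rough sketch, assuming some yuv420_to_rgb() helper exists (the helper and its signature are hypothetical, not an actual FFmpeg call):

#include <stdint.h>

/* Hypothetical converter assumed for this sketch: treats its input as a
 * progressive 4:2:0 picture with the given size and strides. */
void yuv420_to_rgb(const uint8_t *y, const uint8_t *u, const uint8_t *v,
                   int y_stride, int c_stride, uint8_t *rgb, int w, int h);

/* Convert each field of an interlaced 4:2:0 frame separately, so the
 * vertical chroma interpolation never mixes lines from different fields. */
void convert_fields(const uint8_t *y, const uint8_t *u, const uint8_t *v,
                    int y_stride, int c_stride,
                    uint8_t *rgb_top, uint8_t *rgb_bot, int w, int h)
{
    /* Top field: even luma and chroma lines, seen as a half-height frame. */
    yuv420_to_rgb(y, u, v, 2 * y_stride, 2 * c_stride, rgb_top, w, h / 2);
    /* Bottom field: odd luma and chroma lines. */
    yuv420_to_rgb(y + y_stride, u + c_stride, v + c_stride,
                  2 * y_stride, 2 * c_stride, rgb_bot, w, h / 2);
}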
I believe using this approach you could use low power hardware such as ION or AMD Sempron 140 with 785G chipset for top quality set-top box style SD and HD video.
I can't comment on HD audio. That's a more complex requirement.
Stu-e