Mailing List archive


[mpeg2] Re: DeInterlaced Video



On Monday 20 August 2001 09:30 am, you wrote:
> > I can't get this idea out of my mind.  If I provided the
> > kfir board with a progressive-scan signal, would it
> > encode that signal without the feathering effects?
>
> Hmmm - I read your post before and wasn't quite sure so I didn't answer...
> I have been working with some video stuff in Linux for a bit now, and I am
> familiar with the "feathering effect" that you mention you see as a result
> of how the signal is deinterlaced.  We have a camcorder that will do
> Progressive Scan output and I have played with it a bit but I don't think
> you will see any difference with the Kfir.  I think this because even
> though you may have a progressive scan source, the output of your video
> device still complies with the NTSC standard, as does the input to the
> Kfir.  So, I think that the signal entering the Kfir will still have two
> separate fields. The progressive scan stuff works really well if you are
> taking the video from the camera digitally (ie - Firewire) because it is
> not converted to NTSC and therefore you don't suffer from the deinterlacing
> problems, because the signal does not need to be deinterlaced.
>

You are, in fact, exactly right.  I went and did some research, and found
this page:
http://www.madrigal.com/PVP.html

You have come to the exact same conclusion they did.  The source,
no matter how you figure it, remains interlaced.  So even "deinterlacing",
or what they call "line doubling", still suffers feathering: each
half frame is written in its own 1/60 of a second, so fast motion
leaves feathering wherever one half frame shows a different picture
than the next.
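
Just to make the mechanics concrete, here is a rough sketch in C of what
"weaving" the two fields back into one frame looks like, and why motion
between the two field times shows up as feathering.  The buffer names and
layout are made up by me - nothing here is Kfir-specific:

/* Sketch: weave two NTSC fields into one frame.  Assumes 8-bit
 * grayscale buffers; field_even holds the even (top) scanlines,
 * field_odd the odd ones, each h/2 lines tall and captured 1/60 s
 * apart.  Illustrative only. */
#include <string.h>

void weave_fields(unsigned char *frame,              /* w * h output */
                  const unsigned char *field_even,   /* w * h/2, time t */
                  const unsigned char *field_odd,    /* w * h/2, t + 1/60 s */
                  int w, int h)
{
    for (int y = 0; y < h; y++) {
        const unsigned char *src = (y & 1) ? field_odd  + (y / 2) * w
                                           : field_even + (y / 2) * w;
        memcpy(frame + y * w, src, w);
    }
    /* Anything that moved between the two field times now sits at
     * slightly different positions on alternating lines - that is
     * the feathering/combing you see on fast motion. */
}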

Their solution is to use a hardware-based motion compensation algorithm,
which is very expensive.
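
A cheap software stand-in for part of that idea is a motion-adaptive filter:
only touch the pixels that actually moved between the fields.  This is
nowhere near real motion compensation (which estimates motion vectors), but
it shows the principle.  Again just a sketch with made-up names:

/* Sketch: crude motion-adaptive deinterlacing on an already-woven
 * frame (see weave_fields above).  Where a line disagrees strongly
 * with its neighbours, assume motion and interpolate ("bob"); where
 * the scene is static, keep the woven detail. */
#include <stdlib.h>

void deinterlace_adaptive(unsigned char *frame, int w, int h, int threshold)
{
    for (int y = 1; y < h - 1; y += 2) {       /* lines from the odd field */
        for (int x = 0; x < w; x++) {
            int above = frame[(y - 1) * w + x];
            int here  = frame[y * w + x];
            int below = frame[(y + 1) * w + x];
            /* a big difference from the surrounding lines suggests motion */
            if (abs(here - (above + below) / 2) > threshold)
                frame[y * w + x] = (unsigned char)((above + below) / 2);
        }
    }
}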

You are also correct about video cameras - they capture images frame-by-
frame (the CCD is essentially a 2D array of x, y positions, each with an
intensity value, sometimes color as well) - and they can, in hardware,
either transmit that data serially (as in a progressive-scan image) or
interlaced, which actually requires more work because the video processor
has to do the interlacing.
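
For what it's worth, that extra work is basically just splitting each
captured frame into two fields of alternating scanlines before it goes out
as NTSC.  Roughly like this (again, illustrative names, not any real
camera's firmware):

/* Sketch: split a progressive frame into two fields of alternating
 * scanlines - the interlacing step a camera's video processor has to
 * perform before NTSC output. */
#include <string.h>

void split_into_fields(const unsigned char *frame,   /* w * h progressive */
                       unsigned char *field_even,    /* w * h/2 */
                       unsigned char *field_odd,     /* w * h/2 */
                       int w, int h)
{
    for (int y = 0; y < h; y++) {
        unsigned char *dst = (y & 1) ? field_odd  + (y / 2) * w
                                     : field_even + (y / 2) * w;
        memcpy(dst, frame + y * w, w);
    }
}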

Another interesting project is this one:
http://deinterlace.sourceforge.net/

See screenshots:
http://deinterlace.sourceforge.net/screenshots/index.htm

In this project, they are using some algorithms to repair video artifacts
after encoding - but from what was previously explained to me on this list,
I don't think they are doing motion compensation, so their algorithms may
not fix feathering.

-------------------------------------------------------------------------------------------------------
Previous message:
---------------------------
Their solution only works (and is only intended) for progressive images,
that is to say, 24fps film telecine'd for NTSC or PAL broadcast. All they
are doing is undoing the field pulldown and reconstructing the progressive
images on the fly (which is actually a very impressive trick).

If your source is 60fps live video, it won't really help you.
-------------------------------------------------------------------------------------------------------
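
For anyone who hasn't seen it spelled out, the 3:2 pulldown they are undoing
maps four film frames onto ten video fields.  A toy illustration (not how
any real encoder or their code is written):

/* Sketch: 3:2 (a.k.a. 2:3) pulldown - four 24 fps film frames become
 * ten NTSC fields.  Inverse telecine undoes this mapping, which only
 * works because the source really was progressive film. */
#include <stdio.h>

int main(void)
{
    const int pulldown[4] = { 2, 3, 2, 3 };  /* fields per film frame */
    int parity = 0;                          /* 0 = top, 1 = bottom   */

    for (int frame = 0; frame < 4; frame++) {
        for (int i = 0; i < pulldown[frame]; i++) {
            printf("field: %s of film frame %c\n",
                   parity ? "bottom" : "top", 'A' + frame);
            parity ^= 1;                     /* fields alternate */
        }
    }
    return 0;
}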

The project is very limited right now - they've even hard-coded the paths
and filenames they are working on.  But I still think their algorithms could
be used to post-process the kfir captured images - primarily to fix the
feathering.

That's all.
Torsten



