Hello, I'd like to replace my VDR with Duron 1200, Skystar2 and GeForce4.
I'm not particularly interested in HDTV but I'd like to prepare the new VDR for the future, even though I'm on a budget.
I'd buy a Skystar2-HD with a GeForce that supports VDPAU. Would you be so kind as to tell me whether the SS2-HD works OK, also with VDPAU? More importantly, in the absence of VDPAU, would either of these CPUs be up to the decoding task: an Athlon II X3 450 3.2 GHz, or a Pentium E6500 or E6700 3.0 GHz?
Thank you very much.
On Sun, Jan 2, 2011 at 3:52 PM, Adrian C. anrxc@sysphere.org wrote:
I can't advise on those CPUs but I'm sure others on the list can. I'd just like to mention that my experience using VDR and VDPAU together has been mostly successful. _For me_, it has been stable enough to recommend the setup to other users. Also, if I know I'm going to use VDPAU, I go for the cheapest CPU option available since I know what to expect from VDPAU performance-wise. I have no worries about it being able to handle whatever I throw its way (currently using GT240 cards which cost me about $40 each IIRC). The only exception would be VC-1 material, which I don't have much of, but again I think other users can chime in there. Considering cost and performance, I'll never even consider buying a DVB card with hardware decoding again at this point.
As with anything computer related, YMMV.
On 2011-01-02 at 18:56 -0800, VDR User wrote:
Those CPUs are fast enough for H.264 decoding, but in the absence of VDPAU, realtime 1080i deinterlacing will be difficult. You'd probably have to use a simple bobber.
VC-1 decoding gets fully offloaded on feature set B (VP3) and C (VP4) cards such as the GT 220 and 240, so it should work fine. On feature set A cards it is partially decoded by libavcodec, which doesn't support interlaced VC-1 streams.
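(Aside: if you want to check what a particular card actually offloads, the vdpauinfo utility, assuming it is installed, prints a decoder-capabilities table; something along these lines should show whether VC-1 profiles are supported:

  vdpauinfo | grep -i vc1

No VC1 rows in the output would mean VC-1 is not fully offloaded on that card.)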
-- Niko
On 3 January 2011 09:52, Adrian C. anrxc@sysphere.org wrote:
Have a read through this previous thread: http://www.mail-archive.com/vdr@linuxtv.org/msg12953.html
In general, get a GT220, as it has built-in audio hardware, so you should get audio without clock drift relative to the HDMI output. It is also powerful enough to do temporal-spatial deinterlacing on 1080i material.
If you're on a budget, a 9500 GT would do as well, although it only has S/PDIF passthrough for audio over HDMI.
People are doing 1080p with VDPAU on single-core Atom processors, so any modern processor that you can buy these days should do, including all of the above.
What do you think about NVIDIA's GeForce GT 430? http://www.anandtech.com/show/3973/nvidias-geforce-gt-430
It seems to be the best choice for a VDR/HTPC:
- runs cooler than the GT220
- more powerful
- HDMI 1.4
- 3D over HDMI
- Ethernet channel
- Audio return channel
- 4k × 2k resolution support
On Sat, Jan 15, 2011 at 1:09 PM, Goga777 goga777@bk.ru wrote:
It's a nice card but I'm not sure why you think it's the best choice for VDR/HTPC. It's not going to give you any better image quality on HD content than you get from a GT220 at half the price. I don't see any advantage for most users in spending the extra money for one.
On 15/01/11 21:49, VDR User wrote:
[Snip]
what do you think about NVIDIA's GeForce GT 430
[Snip]
Even if it does run cooler than a GT220 it can't be by much judging by the size of the heatsinks. Ones with fans might be too noisy in an HTPC, and ones without will need a well-ventilated case, bearing in mind they might be working quite hard decoding HD for long periods. So...
I wonder whether it might be possible to use a more economical card which is only powerful enough to decode 1080i without deinterlacing it, and take advantage of the abundant CPU power most people have nowadays to perform software deinterlacing. It may not be possible to have something as sophisticated as NVidia's temporal + spatial, but some of the existing software filters should scale up to HD without overloading the CPU, seeing as it wouldn't be doing the decoding too.
Alternatively, use software decoding, and hardware deinterlacing. Somewhere on linuxtv.org there's an article about using fairly simple OpenGL to mimic what happens to interlaced video on a CRT, but I don't know how good the results would look.
BTW, speaking of temporal and spatial deinterlacing: AFAICT one means combining fields to provide maximum resolution with half the frame rate of the interlaced fields, and the other maximises the frame rate while discarding resolution; but which is which? And does NVidia's temporal + spatial try to give the best of both worlds through some sort of interpolation?
On Sat, Jan 15, 2011 at 2:36 PM, Tony Houghton h@realh.co.uk wrote:
Well, you can get a GT220 for around $40 USD, which does full-rate temporal-spatial 1080i and lets you pair it with an old, slow CPU that's dirt cheap, if you don't already have one collecting dust in your basement. Not sure how much more economical you can get, short of free.
On 16/01/11 01:16, VDR User wrote:
I also/mainly mean more economical in power consumption and ease of installation and cooling. Most cheap GT220s have fans (most likely cheap & noisy ones) so I wouldn't want one of them in my HTPC. A fanless one might overheat being packed in closely with my DVB cards. But many motherboards already have integrated NVidia chipsets with HDMI, including audio, and basic VDPAU functionality. Mine is an 8200 and I know there's also been a lot of interest in Ion systems for HTPCs, so I think finding some way of getting these systems to display 1080i nicely should be a good move.
On Sun, Jan 16, 2011 at 6:00 AM, Tony Houghton h@realh.co.uk wrote:
It's a bad assumption to say less expensive GT220 cards have cheap and noisy fans. It's simply not true. It's funny you mention Ion as well. I have both Ion and Ion2 systems. One I'm using as a full-time HTPC, the other is a test box at the moment. And they do 1080i just fine. The Ion1 box can't do temporal-spatial on 1080i but it does temporal just fine. I'm very satisfied with the very low power consumption and zero noise from the Ions.
Maybe a better idea is to not assume anything at all, but rather actually look up real-life data or just buy one and see for yourself (as I did). There's no reason to take guesses about any of this stuff; plenty of users have posted their results and specs at various forums. A good place to start would be nvnews.net and the thread "VDPAU testing tool".
Cheers
On Sun, Jan 16, 2011 at 9:42 AM, Eric Valette eric.valette@free.fr wrote:
The Ion2 is currently being used for testing. It can actually do just over 60 fields per second of temporal-spatial deinterlacing on 1080i with the latest stable driver, 260.19.29. IIRC previous driver versions had some issues there.
On Sun, 16 Jan 2011 09:33:30 -0800 VDR User user.vdr@gmail.com wrote:
It's a bad assumption to say less expensive GT220 cards have cheap and noisy fans. It's simply not true.
I've bought many graphics cards over the years and every time one came with a fan it's been noisy and I've replaced it with an aftermarket cooler with a bigger heatsink, and either a bigger fan(s) or no fan.
People have different standards of noisy. If everyone was as demanding as me they wouldn't have considered using an XBox as a media player!
The pictures of these cards are enough for me, I'm sticking to my assumption that if I bought a GT220 I'd have to budget for either getting a specialist model with silent cooler, or replacing the cooler myself.
The results don't give the right information to determine how well a card can handle 1080i.
On Sun, Jan 16, 2011 at 10:22 AM, Tony Houghton h@realh.co.uk wrote:
Indeed they do. I'm particular about noise as I use htpc's with my televisions. I don't want to watch something and have to listen to a fan. If I can barely hear a fan with the tv off, that is acceptable but it must be very low noise.
No, pictures aren't enough. That's as silly as saying you can look at a car and somehow magically know how it handles while driving. Sorry, doesn't cut it.
You apparently don't know that the results come from analyzing actual playback of actual samples of actual content. Yes, the data tells you exactly what kind of performance you can expect, since it's generated from real use cases. Again, stop assuming everything and turning your nose up at first-hand experience. I've run those tests myself, obviously know what deinterlacers I'm using, and have watched plenty of content, seeing the result with my own eyes on the hardware we're talking about. Additionally, I've done the same with various hardware configurations. What you're telling people simply doesn't agree with reality.
I've avoided the noise problem by putting the VDR under the stairs where it can make as much noise as it likes. There it plugs into an X-VGA splitter/broadcaster which sends duplicate signals over CAT-5 to each TV, where another small STB converts the signal back into VGA. I've also put infrared extenders everywhere. Result - a TV with no other hardware visible: no cables, no equipment, nothing. Just a TV on a wall bracket. Wife happy!
On Mon, 17 Jan 2011 09:53:00 +1300 "Simon Baxter" linuxtv@nzbaxters.com wrote:
Does that work with HD without much quality compromise?
The VGA adapter I bought supports my TV which does 1366x768 just fine. It will also do 1920 resolution (I think) but my TV won't do that anyway.
Picture is perfect - no complaints. Only thing my setup won't do is different front ends - but I have no need to watch different things in different rooms.
On Sun, 16 Jan 2011 10:46:27 -0800 VDR User user.vdr@gmail.com wrote:
I can tell the difference between a Lotus Elise and a Volvo 740 by looking at pictures as well as I can tell the difference between a cooler designed to be silent and a cooler designed to be cheap.
What models of GT220 do you use?
I only looked at the first page and didn't notice that the tool had been improved with more useful tests since the early postings. I'll have to give it a try myself.
On Sun, 16 Jan 2011 10:46:27 -0800 VDR User user.vdr@gmail.com wrote:
I've attached the results of qvdpautest on my desktop PC. Some of the examples appeared to have no more than 2 or 3 frames. Does the test generate a 'realistic' stream using the same few source frames over and over again? Even if it does, it seems a rather narrow sample.
The MIXER results show unrealistically high fps. Evidently the deinterlacing is not being performed at the same time as decoding in these tests. I suppose it's easy enough to calculate the frame rate of both operations combined for a worst case, but how do you know to what extent they can be performed in parallel?
On 2011-01-15 22:36 +0000, Tony Houghton wrote:
I wonder whether it might be possible to use a more economical card which is only powerful enough to decode 1080i without deinterlacing it, and take advantage of the abundant CPU power most people have nowadays to perform software deinterlacing. It may not be possible to have something as sophisticated as NVidia's temporal + spatial, but some of the existing software filters should scale up to HD without overloading the CPU, seeing as it wouldn't be doing the decoding too.
It's possible, but realtime GPU deinterlacing is more energy-efficient:
- For CPU deinterlacing, you'd need something like Greedy2Frame or TomsMoComp. They should give about the same quality as Nvidia's temporal deinterlacer, but the code would need to be threaded to support lower-frequency multicore CPUs.
Yadif almost matches temporal+spatial in quality, but it will also be about 50% slower than Greedy2Frame. (A command-line sketch of this software route follows below.)
- Hardware-decoded video is already in the GPU memory and moving 1920x1080-pixel frames around is not free.
- Simple motion-adaptive, edge-interpolating deinterlacing can be easily parallelized for GPU architectures, so it will be more efficient than on a serial processor. For example, GT 220 can do 1080i deinterlacing at more than 150 fps (output). Normal 50 fps deinterlacing only causes partial load and power consumption. GT 430 is currently worse because of an unoptimized filter implementation: http://nvnews.net/vbulletin/showthread.php?p=2377750#post2377750
Still, only the latest CPU generation can reach that kind of performance with a highly optimized software deinterlacer.
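For anyone who wants to try the software route anyway, here is a minimal MPlayer sketch, assuming a build with yadif and threaded decoding; the file name is just a placeholder and the thread count should match your CPU:

  mplayer -lavdopts threads=3 -vf yadif=1 -vo xv recording.ts

yadif=1 outputs one frame per field (full 50 fps from 50i), which is what you want for smooth motion.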
Alternatively, use software decoding, and hardware deinterlacing.
GPU video decoding is very efficient thanks to dedicated hardware. I'd guess that current chips only use about 3 Watts for high-bitrate 1080i25.
Also, decoding and filtering aren't executed on the same parts of the GPU chip. They are almost perfectly parallel processes, so combined throughput will be that of the slower process.
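To make that concrete with purely illustrative numbers: if a card decoded at 200 fps and ran the mixer at 150 fps, a strictly serial pipeline would manage about 1 / (1/200 + 1/150) ≈ 86 fps, whereas near-perfect overlap gives min(200, 150) = 150 fps. In practice the combined rate should land close to the slower of the two figures.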
That OpenGL CRT-simulation approach sounds like normal bobbing with interpolation. Even if it simulates a phosphor delay, it probably won't look much better than MPlayer's -vf tfields or the bobber in VDPAU.
Sharp interlaced (and progressive) video is quite flickery on a CRT too.
BTW, speaking of temporal and spatial deinterlacing: AFAICT one means combining fields to provide maximum resolution with half the frame rate of the interlaced fields, and the other maximises the frame rate while discarding resolution; but which is which? And does NVidia's temporal + spatial try to give the best of both worlds through some sort of interpolation?
Temporal = motion-adaptive deinterlacing at either half or full field rate. Some programs refer to the latter as "2x". "Motion adaptive" means that the filter detects interlaced parts of each frame and adjusts deinterlacing accordingly. This gives better quality in stationary parts.
Temporal-spatial = Temporal with edge-directed interpolation to smooth jagged edges of moving objects.
Both methods give about the same spatial and temporal resolution but temporal-spatial will look nicer.
--Niko
On Tue, 18 Jan 2011 15:06:50 +0200 Niko Mikkilä nm@phnet.fi wrote:
I still can't translate that explanation into simple mechanics. Is temporal like weave and spatial like bob or the other way round? Or something a little more sophisticated, interpolating parts of the picture belonging to the "wrong" field from previous and/or next frames?
On 2011-01-18 14:49 +0000, Tony Houghton wrote:
"Temporal 1x" weaves the parts of the frame that aren't combed (stationary objects) and interpolates one of the fields to fill the combed parts. I don't think it uses temporal information from other fields while interpolating. That would result in blurry video without motion compensation, which is too heavy at least for low-end GPUs. The output rate for 50 Hz interlaced video is 25 fps.
"Temporal 2x" does the same but outputs one frame for each input field, keeping full temporal and spatial resolution. Output rate is 50 fps.
"Temporal spatial 1x" does the same as "temporal 1x" but it smoothes the rough diagonal edges in interpolated parts of the frame. Output rate is 25 fps.
"Temporal spatial 2x" does the same as "temporal 2x" but it smoothes the edges. Output rate is 50 fps.
So the "temporal" part refers to motion-adaptiveness, or some kind of combing detection in a weaved frame. I haven't written a deinterlacer myself, so can't say what the used methods are exactly. If you want to know more about the "spatial" part of these filters, search for Edge-Directed Interpolation (EDI). Yadif uses a similar technique.
--Niko
--- On Tue, 18/1/11, Niko Mikkilä nm@phnet.fi wrote:
My experience with an nVidia GT220 has been less than perfect. It can perform temporal + spatial + inverse telecine on HD video fast enough, but my PC gets hot, and it truly sucks at 2:2 pulldown detection. The result is that when viewing progressive video encoded as interlaced field pairs (2:2 pulldown), deinterlacing keeps cutting in and out every second or so, ruining the picture quality.
IMHO the best way to go for a low power HTPC is to decode in hardware e.g. VDPAU, VAAPI, but output interlaced video to your TV and let the TV sort out deinterlacing and inverse telecine.
I have experimented using FFMPEG and OpenGL and I achieved a very good quality picture on a 1080i CRT monitor (I have yet to try an HDMI flat panel TV).
These are the key requirements to achieve interlaced output:
- Get the right modelines for your video card and TV.
- Draw interlaced fields to your frame buffer at field rate and in the correct order (top field first or bottom field first).
- When drawing a field to the frame buffer, do not overwrite the previous field still in the frame buffer.
- Maintain 1:1 vertical scaling (no vertical scaling), so you will need to switch the video output to match the source video height (480i, 576i or 1080i).
- Display the frame buffer at field rate, synchronised to the graphics card's vertical sync.
- Finally, there is NO requirement to synchronise fields: fields are always displayed in the same order they are written to the frame buffer, even if fields are occasionally dropped.
This can be applied to both interlaced and progressive video, so you don't have to switch between interlaced and progressive output modes; the one exception is that you need to perform colour space conversion on separated fields for interlaced material and on whole frames for progressive material.
I believe using this approach you could use low power hardware such as ION or AMD Sempron 140 with 785G chipset for top quality set-top box style SD and HD video.
I can't comment on HD audio. That's a more complex requirement.
Stu-e
On Wed, 2011-01-19 at 10:18 +0000, Stuart Morris wrote:
I think VDPAU's inverse telecine is only meant for non-even cadences like 3:2. Motion-adaptive deinterlacing handles 2:2 pullup perfectly well, so try without IVTC.
Well, flat panel TVs have similar deinterlacing algorithms as what VDPAU provides, but it would certainly be a nice alternative.
Interesting. Could you perhaps write full instructions to some suitable wiki and post the code that you used to do this? I'm sure others would like to try it too.
--Niko
Replying to myself...
On Wed, 2011-01-19 at 12:48 +0200, Niko Mikkilä wrote:
Not perfectly well, apparently; there will be slight artifacting at sharp horizontal edges, so the trigger to deinterlace is pretty low. Probably to avoid any visible combing in interlaced video.
Pullup seems to work fine for me though, but I only have VP2/"VDPAU feature set A" hardware.
--Niko
--- On Wed, 19/1/11, Niko Mikkilä nm@phnet.fi wrote:
My problems with VDPAU inverse-telecine were apparent only on HD video. It did seem to be ok with SD video. With HD video, if I disabled inverse-telecine and left the advanced deinterlacer on, it (not surprisingly) deinterlaces the progressive picture resulting in loss of detail and twittering. For progressive HD material I have to manually turn off deinterlacing, then turn it on again for interlaced material. That's annoying.
On Wed, 19 Jan 2011 12:36:19 +0000 (GMT) Stuart Morris stuart_morris@talk21.com wrote:
For progressive HD material I have to manually turn off deinterlacing, then turn it on again for interlaced material. That's annoying.
I thought there was supposed to be a flag in MPEG meta data which indicates whether pairs of fields are interlaced or progressive so decoders can determine how to combine them without doing any complicated picture analysis. Are broadcasters not using the flag properly, or xine not reading it? xine-ui's preferences dialog has an option to disable interlacing for progressive material, have you set that in whichever front-end you're using?
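(As an aside, a quick way to see what the decoder is being told, assuming a reasonably recent ffprobe; the file name is a placeholder:

  ffprobe -select_streams v -show_frames recording.ts 2>/dev/null | grep -E 'interlaced_frame|top_field_first' | head -20

Each decoded frame reports whether it was flagged as interlaced and which field comes first.)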
On 19 January 2011 23:47, Tony Houghton h@realh.co.uk wrote:
Broadcasters can't even get the EPG data correct.
--- On Wed, 19/1/11, Torgeir Veimo torgeir@netenviron.com wrote:
In my limited experience, watching UK Freeview recordings made with VDR, using Xine's tvtime deinterlacer with the progressive-frame flag option set, deinterlacing is on all of the time, including for video derived from a progressive film source, which is wrong.
I think it is safe to rely on this flag for deciding whether to convert colour space on fields or on frames, but it seems it gives you no clue whether to deinterlace or not.
On Wed, Jan 19, 2011 at 5:47 AM, Tony Houghton h@realh.co.uk wrote:
There is. Unfortunately I can't begin to count the number of times I've seen the flag set incorrectly, essentially making it useless.
Is it possible to figure out whether the stream is interlaced just by looking at the stream? It seems like it should be possible to work that out within a frame or two (about 33 ms each) and then just ignore the useless flags. The same needs to be done with EPG data. I think the Insignia boxes just try to read data regardless of the flags, because they are able to find data when atscepg won't.
On 1/19/2011 8:55 AM, VDR User wrote:
I believe the issue with this flag is understandable when you consider the very simple nature of most set-top boxes decoding broadcast digital TV. They always send video to the TV interlaced, regardless of the content, so they do not care about de-interlacing. However, they do need to know how to convert the decoded frame's colour space, and for that I suspect the interlace flag can be relied upon.
If the content is flagged as interlaced, separate the decoded YUV frame into separate YUV fields then convert to RGB. If the flag is clear convert the decoded YUV frame to RGB. For all material send to the TV interlaced at the appropriate resolution.
This will also be important if the application is ever likely to display video media other than broadcast TV, where it is flagged as progressive.
If however you wish to de-interlace the picture you will need sophisticated pulldown detection which will disable the deinterlacer when progressive content is detected. To detect 2:2 pulldown for example (typical progressive source material broadcast in the UK) you would need to detect combing artefacts within successive decoded frames. No combing/mouse teeth for several consecutive frames would then cause the de-interlacer to be disabled.
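(For what it's worth, I believe newer FFmpeg builds ship an idet filter that performs exactly this kind of per-frame combing analysis; a rough sketch for surveying a recording, with the file name as a placeholder:

  ffmpeg -i recording.ts -an -vf idet -vframes 1000 -f null - 2>&1 | grep detection

The summary lines report how many frames looked TFF, BFF, progressive or undetermined.)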
3:2 pulldown is a little easier to detect because there are flags to indicate fields must be repeated. However reconstruction and display of 3:2 video is more complicated.
--- On Wed, 19/1/11, Timothy D. Lenz tlenz@vorgon.com wrote:
You can't depend on the flag. It's a strange one. I have a channel that is reported as 1080i by the femon plugin, but deinterlacing has to be off sometimes to reduce jitter; other times it can be on. The FCC has gotten very lax in requirements and even more lax in enforcing what rules they do have.
On 1/19/2011 6:47 AM, Tony Houghton wrote:
I thought it had to be deinterlaced as it was decoded. If we could just decode and send the stream at whatever resolution it is in (720p, 1080i, 1080p), then the work would be offloaded to the TV. That might be a nice option for those of us with marginal video cards.
On 1/19/2011 3:48 AM, Niko Mikkilä wrote:
On 19 January 2011 20:18, Stuart Morris stuart_morris@talk21.com wrote:
IMHO the best way to go for a low power HTPC is to decode in hardware e.g. VDPAU, VAAPI, but output interlaced video to your TV and let the TV sort out deinterlacing and inverse telecine.
Unfortunately, with VDPAU, the hardware combines fields into frames, then scales, which results in ghosting with interlaced material.
So this approach would not work with stock xineliboutput, which uses a fixed output resolution. If you could avoid the scaling altogether with interlaced material, eg with a modified xineliboutput setup, then this would be feasible I guess.
ref: http://www.mail-archive.com/vdr@linuxtv.org/msg09259.html http://www.mail-archive.com/xorg@lists.freedesktop.org/msg05270.html http://www.mail-archive.com/xorg@lists.freedesktop.org/msg05610.html
--- On Wed, 19/1/11, Torgeir Veimo torgeir@netenviron.com wrote:
One would need to be able to access the decoded frame containing 2 fields and perhaps use an OpenGL shader to perform field based colour space conversion and then draw the first field to the frame buffer. At the next vertical sync the shader would convert the second field and draw that to the frame buffer. With VDPAU is there a new OpenGL interop function that allows access to the decoded frame?
I should add I have not yet ventured into writing OpenGL shaders!
Stu-e
Hi,
On 19.01.2011 13:42, Stuart Morris wrote:
If you enable bob deinterlacing you'll get that. Just set an interlaced video mode of the appropriate resolution. I cannot tell whether VDPAU honors the TOP/BOTTOM field flag and displays the frame when the field is due. This was always a problem with xxmc and the VIA EPIA CLE 266. Incorrect field order is most noticeable on fast movements.
Bye.
--- On Thu, 20/1/11, Reinhard Nissl rnissl@gmx.de wrote:
It is very similar to bob except the crucial difference is that both of the most recent fields are present in the frame buffer at the same time to avoid field display order problems. Where bob would simply scale each field to the full height of the frame buffer at field rate, we would need each field line drawn on alternate lines leaving the previous field on the lines in between (i.e. weave the 2 most recent fields together at field rate).
Conventional bob on an interlaced display would sometimes display correctly and sometimes not depending on luck, because field synch would be required.
On Wed, Jan 19, 2011 at 12:42:40PM +0000, Stuart Morris wrote:
That's not the whole story. You still have to consider synchronicity between the incoming data rate (TV stream) and the outgoing data rate (VGA/video timing).
That is to say: the VGA/video timing must be adjustable dynamically, at least in very small increments. AFAIK no graphics hardware supports this feature to date.
Greetings from my project [1]. I didn't proceed with it for lack of more recent HDTV-capable graphics hardware suitable for this idea.
- Thomas
[1] http://lowbyte.de/vga-sync-fields/vga-sync-fields/README
On 2011-01-22 08:16 +0100, Thomas Hilber wrote:
Yep. As Stuart said, framedrops/duplicates will happen, but with his drawing technique they don't cause the player to lose field sync. I think that's already quite acceptable, since at least recordings can be played without any video judder if audio is resampled.
--Niko
--- On Sat, 22/1/11, Niko Mikkilä nm@phnet.fi wrote:
Standard definition video is going to be harder than I thought. I used xrandr to set this mode via HDMI to my LCD TV:

# 1440x576i @ 50Hz (EIA/CEA-861B)
ModeLine "1440x576" 27.000 1440 1464 1590 1728 576 581 587 625 -hsync -vsync Interlace

The TV reported mode 576i OK, but the desktop graphics were unreadable. I tried to view an interlaced standard definition video using my little test application and it looked awful. However, the 1080i mode worked very well:

# 1920x1080i @ 50Hz (EIA/CEA-861B)
Modeline "1920x1080" 74.250 1920 2448 2492 2640 1080 1085 1095 1125 +hsync +vsync Interlace
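In case anyone wants to reproduce this, the modelines above can be fed to xrandr roughly like so, provided the driver exposes RandR mode management; the mode name and the HDMI-0 output are placeholders, so check the output list from a bare xrandr call first:

  xrandr --newmode "1920x1080i50" 74.250 1920 2448 2492 2640 1080 1085 1095 1125 +hsync +vsync Interlace
  xrandr --addmode HDMI-0 "1920x1080i50"
  xrandr --output HDMI-0 --mode "1920x1080i50"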
I think for standard definition video via HDMI there will be a need to upscale to a resolution better supported by HDMI, and that requires inverse telecine and deinterlacing. This may still be within the capabilities of today's low-power systems.
My little test has satisfied me that 1080i or 1080p video can be displayed with interlaced output.
Stuart
BTW, my hardware setup was an old Sony KDL32V2000 and an AMD HD4200 integrated graphics chipset with the AMD closed driver.
On 28.01.2011 10:57, Stuart Morris wrote: [..]
I guess that's because it's a very strange resolution with a strange aspect ratio; shouldn't that have been 1024x576i to maintain a 16:9 aspect ratio with square pixels? I have only found 1440 on BBC HD, which broadcasts in 1080i but sets the aspect ratio flag to 16:9...
Lucian
--- On Fri, 28/1/11, Lucian Muresan lucianm@users.sourceforge.net wrote:
The HDMI spec has a minimum pixel clock rate, such that modes like 576i and 480i must repeat horizontal pixels to keep the pixel rate above the minimum. There is also an embedded information field in the HDMI link that tells the HDMI sink (the TV) which pixel(s) to discard. There appears to be no way to control this information, and I assume the graphics card is interpolating horizontally anyway (not repeating). This might explain why the display looked so awful.
Stuart
On Fri, 28 Jan 2011 09:57:50 +0000 (GMT) Stuart Morris stuart_morris@talk21.com wrote:
I have the same model of TV (I still think of mine as quite new!). For SD I just use 1280x720 progressive. The PC can deinterlace and upscale 576i with negligible CPU/GPU. I have to say xine's software rendering doesn't give as good a picture as the TV's DVB-T, but I thought subjectively upscaling to 720p looked better than using a native 576 line mode. I haven't had much success with libxine and VDPAU so far, but I haven't tried since updating my NVidia drivers etc to Debian "experimental" (260.19.21). The "unstable" ones are quite out of date (195.36.31) because of the impending Debian release. I've had VDPAU working OK in mplayer for ages though.
The TV's 1280x720 modes are better for video than the 1360x768 native resolution because it automatically turns on some processing and colour balance features, but the overscan and scaling make it unsuitable for the desktop.
Another feature of this TV is that 1280x720 forces 16:9, but 720x576 enables the various options for 4:3 (centre, zoom, "smart", 14:9) so you could use mode switching as a form of aspect ratio signalling. However, changing mode causes the picture and sound to blank out for several seconds :-(.
Don't forget that modern LCD screens only have the resolution they are rated for, so anything you send needs to be an exact division of that, or you will have pixels lost or merged with others as they fall between displayable pixels. CRTs had more points of light than the highest resolution they were rated for, so they could deal with odd multiples better.
On 1/28/2011 2:57 AM, Stuart Morris wrote:
Grr, nvidia and their stupid naming system. The 430 looks more like a 5xx card but with the discontinued number line; the 4xx was being replaced by the refined 5xx. This is the first non-crippled chip released with a 4xx number. So once again, like with the 8400s, we can't be sure what die it's based on.
On 1/15/2011 2:09 PM, Goga777 wrote:
Thanks for the answers guys.