Hi Mauro,
On Friday 30 August 2013 10:31:23 Mauro Carvalho Chehab wrote:
Em Fri, 30 Aug 2013 15:21:05 +0200 Oliver Schinagl escreveu:
On 30-08-13 15:01, Hans Verkuil wrote:
OK, I know, we don't even know yet when the mini-summit will be held but I thought I'd just start this thread to collect input for the agenda.
I have these topics (and I *know* that I am forgetting a few):
Discuss ideas/use-cases for a property-based API. An initial discussion appeared in this thread:
http://permalink.gmane.org/gmane.linux.drivers.video-input-infrastructure/65195
What is needed to share i2c video transmitters between drm and v4l? Hopefully we will know more after the upcoming LPC.
Decide on how v4l2 support libraries should be organized. There is code for handling raw-to-sliced VBI decoding, ALSA looping, finding associated video/alsa nodes and for TV frequency tables. We should decide how that should be organized into libraries and how they should be documented. The first two aren't libraries at the moment, but I think they should be. The last two are libraries but they aren't installed. Some work is also being done on an improved version of the 'associating nodes' library that uses the MC if available.
Define the interaction between selection API, ENUM_FRAMESIZES and S_FMT. See this thread for all the nasty details:
Feel free to add suggestions to this list.
From my side, I'd like to discuss a better integration between DVB and V4L2, including starting to use the media controller API on the DVB side too. Btw, it would be great if we could get a status update on media controller API usage in ALSA. I'm planning to work on such integration soon.
What about a hardware-accelerated decoding API/framework? Is there a proper framework for this at all? I see the broadcom module is still in staging and may never come out of it, but how are other video decoding engines handled that aren't attached to cameras or displays?
Reason for asking is that we at linux-sunxi have made some positive progress in reverse-engineering the video decoder blob of the Allwinner A10, and this knowledge will need a kernel-side driver in some framework. I looked at the exynos video decoders, and googling for linux-media hardware-accelerated decoding doesn't yield much either.
Anyway, just a thought; if you think it's the wrong place for it to be discussed, that's ok :)
Well, the mem2mem V4L2 devices should provide all that would be needed for accelerated encoders/decoders. If not, then feel free to propose extensions to fit your needs.
Two comments regarding this:
- V4L2 mem-to-mem is great for frame-based codecs, but SoCs sometimes only implement part of the codec in hardware, leaving the rest to software. Encoded bitstream parsing is one of the areas left to the CPU, for instance on some ST SoCs (CC'ing Benjamin Gaignard).
- http://www.linuxplumbersconf.org/2013/ocw/sessions/1605