OK, I know, we don't even know yet when the mini-summit will be held, but I thought I'd just start this thread to collect input for the agenda.
I have these topics (and I *know* that I am forgetting a few):
- Discuss ideas/use-cases for a property-based API. An initial discussion appeared in this thread:
http://permalink.gmane.org/gmane.linux.drivers.video-input-infrastructure/65195
- What is needed to share i2c video transmitters between drm and v4l? Hopefully we will know more after the upcoming LPC.
- Decide on how v4l2 support libraries should be organized. There is code for handling raw-to-sliced VBI decoding, ALSA looping, finding associated video/ALSA nodes, and TV frequency tables. We should decide how that should be organized into libraries and how it should be documented. The first two aren't libraries at the moment, but I think they should be. The last two are libraries, but they aren't installed. Some work is also being done on an improved version of the 'associating nodes' library that uses the MC if available.
- Define the interaction between selection API, ENUM_FRAMESIZES and S_FMT. See this thread for all the nasty details:
http://www.spinics.net/lists/linux-media/msg65137.html
Feel free to add suggestions to this list.
Note: my email availability will be limited in the next three weeks, especially next week, as I am travelling a lot.
Regards,
Hans
On 30-08-13 15:01, Hans Verkuil wrote:
[snip]
What about a hardware-accelerated decoding API/framework? Is there a proper framework for this at all? I see the Broadcom module is still in staging and may never come out of it, but how are other video decoding engines handled that don't have cameras or displays?
The reason for asking is that we at linux-sunxi have made some positive progress in reverse engineering the video decoder blob of the Allwinner A10, and this knowledge will need a kernel-side driver in some framework. I looked at the Exynos video decoders, and googling for linux-media hardware-accelerated decoding doesn't yield much either.
Anyway, just a thought; if you think it's the wrong place for it to be discussed, that's ok :)
oliver
On Fri, 30 Aug 2013 15:21:05 +0200, Oliver Schinagl <oliver+list@schinagl.nl> wrote:
On 30-08-13 15:01, Hans Verkuil wrote:
[snip]
From my side, I'd like to discuss a better integration between DVB and V4L2, including starting to use the media controller API on the DVB side too. Btw, it would be great if we could get a status update on media controller API usage in ALSA. I'm planning to work on such integration soon.
What about a hardware-accelerated decoding API/framework? [snip]
Well, the mem2mem V4L2 devices should provide all that would be needed for accelerated encoders/decoders. If not, then feel free to propose extensions to fit your needs.
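For reference, a minimal sketch of what the decode flow on such a mem2mem device looks like from userspace (the device node, formats and omitted error handling are illustrative only; real drivers like s5p-mfc use the MPLANE variants of these calls):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int setup_decoder(const char *node)
{
        int fd = open(node, O_RDWR);
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;        /* compressed bitstream in */
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_H264;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;       /* decoded frames out */
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* then VIDIOC_REQBUFS/QBUF/STREAMON on both queues: encoded
         * buffers are queued on OUTPUT, decoded frames are dequeued
         * from CAPTURE */
        return fd;
}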
Regards, Mauro
Hi Mauro,
On Friday 30 August 2013 10:31:23 Mauro Carvalho Chehab wrote:
[snip]
Well, the mem2mem V4L2 devices should provide all that would be needed for accelerated encoders/decoders. If not, then feel free to propose extensions to fit your needs.
Two comments regarding this:
- V4L2 mem-to-mem is great for frame-based codecs, but SoCs sometimes only implement part of the codec in hardware, leaving the rest to software. Encoded bitstream parsing is one of those areas that are left to the CPU, for instance on some ST SoCs (CC'ing Benjamin Gaignard).
- http://www.linuxplumbersconf.org/2013/ocw/sessions/1605
On Sat, Aug 31, 2013 at 1:54 AM, Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
[snip]
Two comments regarding this:
- V4L2 mem-to-mem is great for frame-based codecs, but SoCs sometimes only implement part of the codec in hardware, leaving the rest to software. Encoded bitstream parsing is one of those areas that are left to the CPU, for instance on some ST SoCs (CC'ing Benjamin Gaignard).
This is an interesting topic for me as well, although I'm still not sure if I can make it to the workshop. Would it make sense to have v4l parser plugins hook up to qbuf and do the parsing there?
Hi Pawel,
On Saturday 31 August 2013 08:58:41 Pawel Osciak wrote:
[snip]
This is an interesting topic for me as well, although I'm still not sure if I can make it to the workshop. Would it make sense to have v4l parser plugins hook up to qbuf and do the parsing there?
Do you mean in libv4l?
On Sat, Aug 31, 2013 at 9:03 AM, Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
[snip]
Do you mean in libv4l?
Yes...
On Saturday 31 August 2013 09:04:14 Pawel Osciak wrote:
[snip]
This is an interesting topic for me as well, although I'm still not sure if I can make it to the workshop. Would it make sense to have v4l parser plugins hook up to qbuf and do the parsing there?
Do you mean in libv4l?
Yes...
Let's discuss that in Edinburgh then. The major problem as I see it is that the hardware codec might consume and produce data that wouldn't fit the spirit of the current V4L2 API. We might end up with passing register lists in a V4L2 buffer, which would be pretty ugly.
Benjamin, do you plan to attend the conference?
Hi all,
Based on STM's past experience, we have seen a variety of userland/kernel and CPU/DSP/microcontroller splits for video codecs. Each time we ended up with a proprietary kernel interface because of the lack of a de facto kernel standard. The principal needs were: no memory copies, a video codec interface (for example, video encoder controls), a frame-based API, and multi-format codecs.
In the past we have seen several hardware partitionings:
a) Basic CPU/hardware split: all the software runs on the CPU; basically it does bitstream parsing and prepares the hardware descriptors used to call the IPs. We made two different implementations:
a.1) One fully in the kernel, embedded in a kernel module. The drawbacks were the proprietary API and a bitstream parsing stack that was reused from a legacy project and not compliant with the kernel coding guidelines.
a.2) Another fully in userland, with a minimal kernel driver to write registers and catch interrupts. The drawbacks were that hardware registers were exposed to userland (a hardware-specific API rather than a functional one) and that physical addresses were exposed to userland.
b) DSP (or microcontroller)/hardware split: the software partially runs on a coprocessor, where the firmware handles the IP controls while the CPU does the bitstream parsing. In this implementation, all of the stack running on the CPU was in userland, with a proprietary API for firmware communication.
After that, Exynos S5P showed up, with an interesting M2M interface very close to what we did in a.1), which lets us hope for an upcoming standardization of the video codec kernel API. The main benefit we see in this is a reduction of software diversity on top of a kernel that is agnostic to the hardware used; for example, we could then introduce a unified GStreamer V4L2 decoder plugin or a unified OMX decoder plugin.
For us it is important to keep the hardware details as low as possible in the software stack (i.e. in kernel drivers) rather than in a collection of proprietary userland libraries. What we are doing now is trying to go this way for our next products.
Regarding S5P MFC, the whole codec software stack remains in firmware, so the kernel driver deals only with power/interrupts/clocks and firmware communication; no processing is done on the input bitstream or output frames. Our split is different because bitstream parsing is left to the CPU, which means we put a significant amount of code into the kernel to do that. The question is: how do we push that code?
We have also seen that several software stacks (ffmpeg, G1, ...) perform the same operations on the bitstream (which is logical, since they are tied to the standards), so what about making that code generic, to avoid embedding almost the same code in several V4L2 drivers?
Benjamin (+Hugues in CC)
Hi Benjamin/Hugues,
On Wed, 04 Sep 2013 10:26:01 +0200, Benjamin Gaignard <benjamin.gaignard@linaro.org> wrote:
[snip]
I think we need more discussion to understand exactly what kind of bitstream operations need to be done for your codec hardware.
As already discussed, technically, there are two possible solutions:
1) delegate those tasks to a userspace library (libv4l);
2) do them in the kernel.
Both have advantages and disadvantages. Among others, one of the reasons we opted to handle the different formats in userspace is that we don't touch floating point (FP) registers in the kernel. For some kinds of codecs, that would be needed.
While it would theoretically be possible to use FP inside the kernel, that would require lots of work (the kernel currently doesn't save FP state; also, as kernel stacks are typically only a few KB in size, pushing FP state onto the stack could be a problem).
Also, FP is very arch-dependent, and we try not to tie drivers to one specific architecture (when that makes sense, of course). So the solution would be a kernel library for FP, capable of using the hardware registers on each specific arch, but writing it would require a lot of time and effort.
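To illustrate the constraint (a sketch only; this is the x86-specific mechanism, and the exact header location varies between kernel versions):

#include <asm/i387.h>   /* x86: kernel_fpu_begin()/kernel_fpu_end() */

static void do_fp_math(void)
{
        kernel_fpu_begin();     /* saves the FPU state, disables preemption */
        /* FP/SIMD may only be used inside this section; the code must
         * not sleep or call anything that might schedule */
        kernel_fpu_end();       /* restores the FPU state */
}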
Also, for a normal V4L2 input or output driver, handling the video format in userspace works fine, and the FP issue is automatically solved by the FP code gcc generates.
Because of that, delegating format handling to userspace proved to be the best approach for the current drivers.
Still, if that doesn't fit your needs, we're open to discussing your proposal.
Btw, this is the kind of discussion that works best in a face-to-face meeting, where you can show more about your problem and talk about your proposals to address it.
Are you planning to go to the mini-summit? If so, how much time do you need for such discussions?
Hi Mauro,
Regarding the floating point issue, we have not encountered such an issue while integrating various codecs (currently H264, MPEG4 and VP8, on both the Google G1 IP and ST IPs). Could you specify which codec you encountered that required FP support?
Regarding a userspace library, the problem we encountered is that the interface between the parsing side (e.g. H264 SPS/PPS decoding, slice header decoding, reference frame list management, and whatever else is needed to prepare the hardware IP calls) and the decoder side (hardware IP handling) is not standardized and differs largely depending on the IPs and the CPU/coprocessor partitioning. This means that even if we use the standard V4L2 capture interface to inject the video bitstream (H264 access units, for example), some proprietary metadata needs to be attached to each buffer, making the V4L2 interface for this driver de facto non-standard. Exynos S5P MFC does not attach any metadata to its input buffers, keeping a standard video bitstream injection interface (which is what well-known standard demuxers, such as the GStreamer or Android Stagefright ones, naturally output). This is the way we want to go: we will keep the hardware details on the kernel driver side. Moreover, this drastically simplifies the integration of our video drivers into userland multimedia middleware, reducing the time to market and the support needed when reaching our end customers. Our target is to create a unified GStreamer V4L2 decoder (encoder) plugin and a unified OMX V4L2 decoder (encoder) plugin to fit Android, based on a single V4L2 M2M API whatever the hardware IP is.
About the mini-summit, Benjamin and I are checking internally how we can attend to discuss this topic. We think about half a day is needed, so that we can share our code and discuss the other codebases you know of that deal with video codecs.
Best regards, Hugues.
Hi Hugues,
On Thursday 05 September 2013 13:37:49 Hugues FRUCHET wrote:
[snip]
This means that even if we use the standard V4L2 capture interface to inject the video bitstream (H264 access units, for example), some proprietary metadata needs to be attached to each buffer, making the V4L2 interface for this driver de facto non-standard.
We're working on APIs to pass meta-data from/to the kernel. The necessary infrastructure is more or less there already; we "just" need to agree on guidelines and standardize the process. One option that will likely be implemented is to store meta-data in a plane, using the multiplanar API.
The resulting plane format will be driver-specific, so we'll lose part of the benefits that the V4L2 API provides. We could try to solve this by writing a libv4l plugin, specific to your driver, that would handle bitstream parsing and fill the meta-data planes correctly. Applications using libv4l would thus only need to pass encoded frames to the library, which would create multiplanar buffers with video data and meta-data, and pass them to the driver. This would be fully transparent for the application.
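As a rough sketch of the buffer mechanics (only the plane handling below is standard MPLANE API; the format itself would be driver-specific, and bitstream_size/meta_size stand in for whatever the parsing code produced):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* queue one buffer: plane 0 carries the encoded frame, plane 1 the
 * driver-specific parsing meta-data filled in by the libv4l plugin */
static int queue_frame(int fd, unsigned int bitstream_size, unsigned int meta_size)
{
        struct v4l2_plane planes[2];
        struct v4l2_buffer buf;

        memset(planes, 0, sizeof(planes));
        memset(&buf, 0, sizeof(buf));
        planes[0].bytesused = bitstream_size;   /* encoded frame */
        planes[1].bytesused = meta_size;        /* parsed meta-data */

        buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = 0;
        buf.length = 2;                         /* number of planes */
        buf.m.planes = planes;

        return ioctl(fd, VIDIOC_QBUF, &buf);
}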
On Fri, Sep 6, 2013 at 10:45 PM, Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
[snip]
We're working on APIs to pass meta-data from/to the kernel. The necessary infrastructure is more or less there already; we "just" need to agree on guidelines and standardize the process. One option that will likely be implemented is to store meta-data in a plane, using the multiplanar API.
What API is that? Is there an RFC somewhere?
The resulting plane format will be driver-specific, so we'll lose part of the benefits that the V4L2 API provides. We could try to solve this by writing a libv4l plugin, specific to your driver, that would handle bitstream parsing and fill the meta-data planes correctly. Applications using libv4l would thus only need to pass encoded frames to the library, which would create multiplanar buffers with video data and meta-data, and pass them to the driver. This would be fully transparent for the application.
If the V4L2 API is not hardware-independent, that's a big loss. If this happens, there will be a need for another, middleware API, like OMX IL, and V4L2 by itself becomes impractical for real-world applications. The incentive to use V4L2 is then gone, because it's much easier to write a custom DRM driver and add any userspace API on top of it. Perhaps this is inevitable, given the differences in hardware, but a plugin approach would be a way to keep V4L2 abstract and retain the ability to do the bulk of the processing in userspace...
Hi Pawel,
On Saturday 07 September 2013 18:31:17 Pawel Osciak wrote:
[snip]
What API is that? Is there an RFC somewhere?
It has been discussed recently as part of the frame descriptors RFC (http://www.spinics.net/lists/linux-media/msg67295.html).
[snip]
If the V4L2 API is not hardware-independent, that's a big loss. [snip] Perhaps this is inevitable, given the differences in hardware, but a plugin approach would be a way to keep V4L2 abstract and retain the ability to do the bulk of the processing in userspace...
I believe we can reach that goal with libv4l. The V4L2 kernel API can't abstract all hardware features, as this would require an API level that can't be properly implemented in kernel space, but with libv4l to the rescue we should be pretty good.
Hi Hugues,
On 09/05/2013 01:37 PM, Hugues FRUCHET wrote:
[snip]
This means that even if we use the standard V4L2 capture interface to inject the video bitstream (H264 access units, for example), some proprietary metadata needs to be attached to each buffer, making the V4L2 interface for this driver de facto non-standard.
There are lots of drivers (mostly camera drivers) that have non-standard video formats. That's perfectly fine, as long as libv4l plugins/conversions exist to convert it to something that's standardized.
Any application using libv4l doesn't notice the work going on under the hood and it will look like a standard v4l2 driver.
The multiplanar API seems to me to be very suitable for this sort of device.
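To illustrate the point (a minimal sketch; the device node and format are arbitrary, error handling omitted): an application that sticks to the v4l2_* wrappers from libv4l2 never sees the driver-specific format, because the library converts behind the scenes.

#include <fcntl.h>
#include <libv4l2.h>
#include <linux/videodev2.h>

int main(void)
{
        int fd = v4l2_open("/dev/video0", O_RDWR);
        struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };

        /* libv4l emulates this format if the driver only offers a
         * non-standard one */
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;
        v4l2_ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* v4l2_read()/v4l2_mmap() then deliver converted frames */
        return v4l2_close(fd);
}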
[snip]
About the mini-summit, Benjamin and I are checking internally how we can attend to discuss this topic. We think about half a day is needed.
We are getting a lot of topics for the agenda and half a day for one topic seems problematic to me.
One option is to discuss this in a smaller group a day earlier (October 22). We might be able to get a room, or we can discuss it in the hotel lounge or pub :-) or something.
Another option is that ST organizes a separate brainstorm session with a few core developers. We've done that in the past quite successfully.
Regards,
Hans
Thanks Hans,
Do you have some implementation based on meta-data that we can check to see the code details? It would be nice to have one with a noticeable amount of code/processing done on the userland side. I'm also wondering how libv4l selects each driver-specific userland plugin and how the plugins are loaded.
BR.
On Tue 10 September 2013 09:36:00 Hugues FRUCHET wrote:
Thanks Hans,
Do you have some implementation based on meta-data that we can check to see the code details?
Not as such. Basically you just add another pixelformat define for a multiplanar format. And you define this format as having X video planes and Y planes containing meta data.
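Purely as an illustration of that idea (the fourcc below does not exist; it is the kind of define a driver submission would add for, say, an H264 bitstream plus one meta-data plane):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* hypothetical driver-specific format: plane 0 = H264 bitstream,
 * plane 1 = parsing meta-data */
#define V4L2_PIX_FMT_H264_META v4l2_fourcc('H', '2', '6', 'M')

static int set_bitstream_format(int fd)
{
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264_META;
        fmt.fmt.pix_mp.num_planes = 2;
        return ioctl(fd, VIDIOC_S_FMT, &fmt);
}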
It would be nice to have one with a noticeable amount of code/processing done on the userland side. I'm also wondering how libv4l selects each driver-specific userland plugin and how the plugins are loaded.
libv4l-mplane in v4l-utils.git is an example of a plugin.
Documentation on the plugin API seems to be sparse, but Hans de Goede, Sakari Ailus or Laurent Pinchart know a lot more about it.
There are (to my knowledge) no plugins that do exactly what you want, so you're the first. But it has been designed with your use-case in mind.
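For a feel of the shape, a skeleton modelled on libv4l-mplane (the probing and parsing logic are placeholders; the QBUF hook is exactly where the bitstream parsing discussed above could live):

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <libv4l-plugin.h>

static void *plugin_init(int fd)
{
        /* probe the device here; return NULL to decline it, or a
         * private handle to claim it for this plugin */
        return calloc(1, 1);
}

static void plugin_close(void *priv)
{
        free(priv);
}

static int plugin_ioctl(void *priv, int fd, unsigned long cmd, void *arg)
{
        if (cmd == VIDIOC_QBUF) {
                /* placeholder: parse the queued bitstream and fill the
                 * meta-data plane before it reaches the driver */
        }
        return ioctl(fd, cmd, arg);     /* simplified pass-through */
}

/* libv4l2 looks this symbol up in each plugin it dlopen()s */
const struct libv4l_dev_ops libv4l2_plugin = {
        .init  = plugin_init,
        .close = plugin_close,
        .ioctl = plugin_ioctl,
};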
Regards,
Hans
Hi Hugues,
Do you think it would be possible to discuss this topic in a small group on Monday (October 21st)? Half a day for this during the summit itself is too long, but if we can discuss it on Monday, then we can just present the results of that meeting during the summit.
Monday would be best since then Laurent Pinchart is available (he's attending the ARM summit on Tuesday), and his input would be very useful.
Please let me know asap whether this is an option for you.
Hans, would you be available for this on the Monday as well?
Regards,
Hans
On 09/10/2013 09:54 AM, Hans Verkuil wrote:
On Tue 10 September 2013 09:36:00 Hugues FRUCHET wrote:
Thanks Hans,
Have you some implementation based on meta that we can check to see code details ?
Not as such. Basically you just add another pixelformat define for a multiplanar format. And you define this format as having X video planes and Y planes containing meta data.
It would be nice to have one with noticeable amount of code/processing made on user-land side. I'm wondering also how libv4l is selecting each driver specific user-land plugin and how they are loaded.
libv4l-mplane in v4l-utils.git is an example of a plugin.
Documentation on the plugin API seems to be sparse, but Hans de Goede, Sakari Ailus or Laurent Pinchart know a lot more about it.
There are (to my knowledge) no plugins that do exactly what you want, so you're the first. But it has been designed with your use-case in mind.
Regards,
Hans
BR. -----Original Message----- From: Hans Verkuil [mailto:hverkuil@xs4all.nl] Sent: lundi 9 septembre 2013 12:33 To: Hugues FRUCHET Cc: Mauro Carvalho Chehab; Oliver Schinagl; media-workshop; Benjamin Gaignard; linux-media@vger.kernel.org Subject: Re: [media-workshop] Agenda for the Edinburgh mini-summit
Hi Hugues,
On 09/05/2013 01:37 PM, Hugues FRUCHET wrote:
Hi Mauro,
For floating point issue, we have not encountered such issue while integrating various codec (currently H264, MPEG4, VP8 of both Google G1 IP & ST IPs), could you precise which codec you experienced which required FP support ?
For user-space library, problem we encountered is that interface between parsing side (for ex. H264 SPS/PPS decoding, slice header decoding, references frame list management, ...moreover all that is needed to prepare hardware IPs call) and decoder side (hardware IPs handling) is not standardized and differs largely regarding IPs or CPU/copro partitioning. This means that even if we use the standard V4L2 capture interface to inject video bitstream (H264 access units for ex), some proprietary meta are needed to be attached to each buffers, making de facto "un-standard" the V4L2 interface for this driver.
There are lots of drivers (mostly camera drivers) that have non-standard video formats. That's perfectly fine, as long as libv4l plugins/conversions exist to convert it to something that's standardized.
Any application using libv4l doesn't notice the work going on under the hood and it will look like a standard v4l2 driver.
The multiplanar API seems to me to be very suitable for these sort of devices.
Exynos S5P MFC is not attaching any meta to capture input buffers, keeping a standard video bitstream injection interface (what is output naturally by well-known standard demuxers such as gstreamer ones or Android Stagefright ones). This is the way we want to go, we will so keep hardware details at kernel driver side. On the other hand, this simplify drastically the integration of our video drivers on user-land multimedia middleware, reducing the time to market and support needed when reaching our end-customers. Our target is to create a unified gstreamer V4L2 decoder(encoder) plugin and a unified OMX V4L2 decoder(encoder) to fit Android, based on a single V4L2 M2M API whatever hardware IP is.
About mini summit, Benjamin and I are checking internally how to attend to discuss this topic. We think that about half a day is needed to discuss this, we can so share our code and discuss about other codebase you know dealing with video codecs.>
We are getting a lot of topics for the agenda and half a day for one topic seems problematic to me.
One option is to discuss this in a smaller group a day earlier (October 22). We might be able to get a room, or we can discuss it in the hotel lounge or pub :-) or something.
Another option is that ST organizes a separate brainstorm session with a few core developers. We've done that in the past quite successfully.
Regards,
Hans
Hi Hans, Unfortunately I'm not available on Monday; I can only do Wednesday the 23rd. Regarding the discussions we already had, half a day is certainly too long; we can shorten it to 1 or 2 hours, I think.
BR.

-----Original Message----- From: Hans Verkuil [mailto:hverkuil@xs4all.nl] Sent: Monday 23 September 2013 12:37 To: Hugues FRUCHET Cc: Oliver Schinagl; linux-media@vger.kernel.org; media-workshop; Benjamin Gaignard; hdegoede@redhat.com Subject: Re: [media-workshop] Agenda for the Edinburgh mini-summit
Hi Hugues,
Do you think it would be possible to discuss this topic in a small group on Monday (October 21st)? Half a day for this during the summit itself is too long, but if we can discuss it on Monday, then we can just present the results of that meeting during the summit.
Monday would be best since then Laurent Pinchart is available (he's attending the ARM summit on Tuesday), and his input would be very useful.
Please let me know asap whether this is an option for you.
Hans, would you be available for this on the Monday as well?
Regards,
Hans
On 09/23/2013 03:00 PM, Hugues FRUCHET wrote:
Hi Hans, Unfortunately I'm not available on Monday; I can only do Wednesday the 23rd.
OK.
Regarding the discussions we already had, half a day is certainly too long; we can shorten it to 1 or 2 hours, I think.
I'll try to squeeze it into the agenda. I plan on posting a more detailed agenda with timeslots later this week.
Are you available for further discussions during (part of) some other day? 24th or 25th perhaps? Just in case we need more time to work on this.
Regards,
Hans
Hi,
On 09/23/2013 12:36 PM, Hans Verkuil wrote:
Hans, would you be available for this on the Monday as well?
I can't tell atm; Monday is a kvm-forum day and I'm a kvm-forum speaker, so it depends on the kvm-forum schedule, which has not been published yet.
Regards,
Hans
On Sun, Sep 1, 2013 at 5:19 AM, Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
On Saturday 31 August 2013 09:04:14 Pawel Osciak wrote:
On Sat, Aug 31, 2013 at 9:03 AM, Laurent Pinchart wrote:
On Saturday 31 August 2013 08:58:41 Pawel Osciak wrote:
On Sat, Aug 31, 2013 at 1:54 AM, Laurent Pinchart wrote:
On Friday 30 August 2013 10:31:23 Mauro Carvalho Chehab wrote:
Em Fri, 30 Aug 2013 15:21:05 +0200 Oliver Schinagl escreveu:
[snip]
> What about a hardware accelerated decoding API/framework? Is there a proper framework for this at all?

[snip]

Well, the mem2mem V4L2 devices should provide all that would be needed for accelerated encoders/decoders. If not, then feel free to propose extensions to fit your needs.
Two comments regarding this:
- V4L2 mem-to-mem is great for frame-based codecs, but SoCs sometimes only implement part of the codec in hardware, leaving the rest to the software. Encoded bitstream parsing is one of those areas that are left to the CPU, for instance on some ST SoCs (CC'ing Benjamin Gaignard).
This is an interesting topic for me as well, although I'm still not sure if I can make it to the workshop. Would it make sense to have v4l parser plugins hook up to qbuf and do the parsing there?
Do you mean in libv4l?
Yes...
Let's discuss that in Edinburgh then. The major problem as I see it is that the hardware codec might consume and produce data that wouldn't fit the spirit of the current V4L2 API. We might end up with passing register lists in a V4L2 buffer, which would be pretty ugly.
Well, this is exactly what I'm wondering about. What should we suggest to vendors whose devices require preprocessing in userspace...
Benjamin, do you plan to attend the conference ?
-- Regards,
Laurent Pinchart
On 08/30/2013 03:21 PM, Oliver Schinagl wrote:
[snip]
What about a hardware accelerated decoding API/framework? Is there a proper framework for this at all? I see the broadcom module is still in staging and may never come out of it, but how are other video decoding engines handled that don't have cameras or displays.
Reason for asking is that we from linux-sunxi have made some positive progress in Reverse engineering the video decoder blob of the Allwinner A10 and this knowledge will need a kernel side driver in some framework. I looked at the exynos video decoders and googling for linux-media hardware accelerated decoding doesn't yield much either.
Anyway, just a thought; if you think it's the wrong place for it to be discussed, that's ok :)
No, this is the right place. See http://hverkuil.home.xs4all.nl/spec/media.html#codec for more information.
For the longest time that section in the spec said that that interface was 'Suspended'. That was only corrected in 3.10 or 3.11 even though actual codec support has been around for much longer. There are many v4l2 drivers today that do this. Just grep for V4L2_CAP_VIDEO_M2M in drivers/media.
Codec drivers are really just a video node that can capture and output at the same time, and has a lot of codec controls (http://hverkuil.home.xs4all.nl/spec/media.html#mpeg-controls) to tweak codec parameters.
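A short sketch of what this looks like from the application side, using only standard calls (assumes kernel headers new enough to define V4L2_CAP_VIDEO_M2M; the bitrate value is arbitrary):

#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>

static int is_codec_node(int fd)
{
	struct v4l2_capability cap;

	memset(&cap, 0, sizeof(cap));
	if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0)
		return 0;
	return (cap.capabilities &
		(V4L2_CAP_VIDEO_M2M | V4L2_CAP_VIDEO_M2M_MPLANE)) != 0;
}

static int set_bitrate(int fd)
{
	/* One of the many codec controls from the control framework. */
	struct v4l2_control ctrl = {
		.id = V4L2_CID_MPEG_VIDEO_BITRATE,
		.value = 4000000,	/* 4 Mb/s, arbitrary */
	};

	return ioctl(fd, VIDIOC_S_CTRL, &ctrl);
}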
Just post any questions you have regarding this to the linux-media mailing list; we're happy to help out.
Regards,
Hans
On 08/30/2013 03:01 PM, Hans Verkuil wrote:
[snip]
I got another one:
VIDIOC_TRY_FMT shouldn't return -EINVAL when an unsupported pixelformat is provided, but in practice video capture boards tend to do that, while webcam drivers tend to map it silently to a valid pixelformat. Some applications rely on the -EINVAL error code.
We need to decide how to adjust the spec. I propose to just say that some drivers will map it silently and others will return -EINVAL and that you don't know what a driver will do. Also specify that an unsupported pixelformat is the only reason why TRY_FMT might return -EINVAL.
Alternatively we might want to specify explicitly that EINVAL should be returned for video capture devices (i.e. devices supporting S_STD or S_DV_TIMINGS) and 0 for all others.
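Until the spec settles this, a portable application has to cope with both behaviours. A minimal sketch of the defensive pattern (the probe resolution is arbitrary):

#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>

static int pixfmt_supported(int fd, __u32 fourcc)
{
	struct v4l2_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	fmt.fmt.pix.width = 640;
	fmt.fmt.pix.height = 480;
	fmt.fmt.pix.pixelformat = fourcc;

	/* Capture-board style: the ioctl fails outright. */
	if (ioctl(fd, VIDIOC_TRY_FMT, &fmt) < 0)
		return 0;

	/* Webcam style: success, but the fourcc was silently replaced. */
	return fmt.fmt.pix.pixelformat == fourcc;
}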
Regards,
Hans
On Sat, 31 Aug 2013, Hans Verkuil wrote:
[snip]
I got another one:
VIDIOC_TRY_FMT shouldn't return -EINVAL when an unsupported pixelformat is provided, but in practice video capture boards tend to do that, while webcam drivers tend to map it silently to a valid pixelformat. Some applications rely on the -EINVAL error code.
We need to decide how to adjust the spec. I propose to just say that some drivers will map it silently and others will return -EINVAL and that you don't know what a driver will do. Also specify that an unsupported pixelformat is the only reason why TRY_FMT might return -EINVAL.
Alternatively we might want to specify explicitly that EINVAL should be returned for video capture devices (i.e. devices supporting S_STD or S_DV_TIMINGS) and 0 for all others.
Just to make sure I understand right - that kind of excludes cameras, right? Still, even for (other) video capture devices, like TV decoders, is there a real serious enough reason to _change_ the specs, which says
http://linuxtv.org/downloads/v4l-dvb-apis/vidioc-g-fmt.html
EINVAL
The struct v4l2_format type field is invalid or the requested buffer type not supported.
If we have a spec that says A, and some drivers do A, but others do B, do we want to change the spec to B? Instead of either changing the (wrong) drivers to A (yes, some applications expect that wrong behaviour) or at least extending the spec to allow both A and B?
Thanks Guennadi --- Guennadi Liakhovetski, Ph.D. Freelance Open-Source Software Developer http://www.open-technology.de/
Hi Guennadi,
On Saturday 31 August 2013 20:38:54 Guennadi Liakhovetski wrote:
[snip]
Just to make sure I understand right - that kind of excludes cameras, right? Still, even for (other) video capture devices, like TV decoders, is there a real serious enough reason to _change_ the specs, which says
http://linuxtv.org/downloads/v4l-dvb-apis/vidioc-g-fmt.html
EINVAL
The struct v4l2_format type field is invalid or the requested buffer type not supported.
I think Hans meant unsupported fmt.pix.pixelformat (or the equivalent for multiplane) values. For instance the uvcvideo driver will return a default fourcc if an application tries an unsupported fourcc, some other drivers return -EINVAL.
If we have a spec that says A, and some drivers do A, but others do B, do we want to change the spec to B? Instead of either changing the (wrong) drivers to A (yes, some applications expect that wrong behaviour) or at least extending the spec to allow both A and B?
On Sat, 31 Aug 2013, Laurent Pinchart wrote:
[snip]
I think Hans meant unsupported fmt.pix.pixelformat (or the equivalent for multiplane) values.
Good, then I understood him correctly :)
For instance the uvcvideo driver will return a default fourcc if an application tries an unsupported fourcc,
Yes, that's what I would do too and that's what the spec dictates.
some other drivers return -EINVAL.
That just seems plain wrong to me. So, as I said, to avoid breaking userspace we can extend the spec, but not prohibit the currently defined behaviour. So, that last option:
Alternatively we might want to specify explicitly that EINVAL should be returned for video capture devices (i.e. devices supporting S_STD or S_DV_TIMINGS) and 0 for all others.
I'm not sure I like that a lot, unless those drivers are very special and they all already behave like that.
Thanks Guennadi
--- Guennadi Liakhovetski, Ph.D. Freelance Open-Source Software Developer http://www.open-technology.de/
On 08/31/2013 10:36 PM, Guennadi Liakhovetski wrote:
[snip]
I'm not sure I like that a lot, unless those drivers are very special and they all already behave like that.
Almost (have to check though) all TV capture drivers behave like that, yes. Very unfortunate.
On the other hand webcam apps must assume that TRY_FMT will just map an unsupported pixel format to a valid pixel format since that is what uvc does. And a webcam app that doesn't support uvc can't be called a webcam app :-)
Regards,
Hans
On Fri, Aug 30, 2013 at 03:01:25PM +0200, Hans Verkuil wrote:
[snip]
- Multi-format frames and metadata. Support would be needed on video nodes and V4L2 subdev nodes. I'll prepare the RFC for the former; the latter has an RFC here:
URL:http://www.spinics.net/lists/linux-media/msg67295.html