Hi all,
I have collected all suggested topics. If I missed any, then let me know.
There seem to be three groups of topics: general topics; compound controls, per-frame configuration and Android; and complex video pipelines.
Note that there are currently no DVB topics at all.
I have grouped the suggestions into those three high-level topics. If you disagree with where I put yours, let me know.
Also let me know how long you estimate the discussion of your topic will take. I know it can be hard to predict, but it tends to average out anyway.
If you put up a topic, then you are expected to prepare for it, either by making a small presentation or at least by thinking about what you want to say. We have only two days and I don't want to waste any time.
Speaking for myself: on one of the two days I have to give a lightning talk at the GStreamer conference, and I want to attend Nicolas' GStreamer presentation as well. I don't know the times and dates yet.
If you have suggested a topic and you also have specific times you won't be available, then let me know as soon as you have the details.
The plan is to start each day at 9 am and go on until 5 pm or so, depending on how things work out.
Regards,
Hans
============= General Topics ==================
Hans Verkuil:
- A presentation on colorspaces (Do *you* know what to put in the v4l2_pix_format colorspace field? And why should it matter?) 30 min
- media development process: what works, what doesn't 10-15 min
Ricardo:
- multiple timestamps per buffer
============== Per-frame configuration/camera HAL v3/Compound control types ============
Ricardo:
- multi-selections and dead pixel API
Hans Verkuil:
- update on configure stores for the control framework 15 min
Sakari:
- Android camera HAL API v3 and what kind of requirements it brings to V4L2.
Pawel:
- Existing codec API ambiguities (does this belong to this topic?)
- A proposal for a new codec API extension/mode for HW codecs that can't parse elementary streams
Guennadi:
- camera support in Android (does this belong to this topic?)
=========== Complex video pipeline drivers topic ===================
Hans Verkuil:
- create virtual complex omap3-like driver: what is needed 15 min
Laurent & Chris Kohn:
- runtime reconfiguration of pipelines
Philip Zabel:
- Helping userspace to use mem2mem devices; clarification of encoder/decoder handling, clarification of format/size setting in case of dependencies between input and output formats, possibly broad categorisation of mem2mem devices (encoder, decoder, scaler, rotator, csc/filter, ...)
- Hierarchical media devices - what if you have a lot of media entities and some of them are more closely related to each other than others.
Julien Beraud:
- Highly reconfigurable hardware devices and the possibility to create media links and virtual subdevices in order to simplify the userland view.
Nicolas Dufresne:
- I'd like to discuss frame size enumeration for scaling m2m devices (e.g. fimc/gscaler and the CUDA decoder).
On Mon, 22 Sep 2014 13:42:44 +0200, Hans Verkuil <hverkuil@xs4all.nl> wrote:
Note that there are currently no DVB topics at all.
I have a few topics for DVB:
- I'd like to talk about the libdvbv5 library - this will be a short presentation (about 30 mins is likely enough);
- The new tuner binding model and how to couple it to the media controller API for DVB (30 mins to 50 mins);
- I would also like to get suggestions about DVB API improvements, especially related to the demod API, but we may also need to discuss other improvements, like, for example, support for DVB-C2 (I would reserve about 50 mins, as those discussions tend to take some time).
Btw, we also need to get a list of attendees, as we might need to request a bigger room depending on the number of people.
Regards, Mauro
Speaking for myself: on one of the two days I have to give a lightning talk in the gstreamer conference and I want to attend Nicolas' gstreamer presentation as well. I don't know the times and dates yet.
I'm also thinking of giving a lightning talk about libdvbv5 at the GStreamer conference.
media-workshop mailing list media-workshop@linuxtv.org http://www.linuxtv.org/cgi-bin/mailman/listinfo/media-workshop
Hi Hans,
Thank you for taking care of this.
On Monday 22 September 2014 13:42:44 Hans Verkuil wrote:
If you put up a topic, then you are expected to prepare for it, either by making a small presentation, or at least to think about what you want to say. We have only two days and I don't want to waste any time.
To save even more time, we should try to send information to the linux-media mailing list beforehand. That could allow skipping part of the presentations of the topics, and would allow attendees to start thinking before the conference. Adding links to mail threads in mail archives to the topics listed below would be useful.
If you have suggested a topic and you also have specific times you won't be available, then let me know as soon as you have the details.
I'll have to attend the IOMMU track at LPC on Friday afternoon (13:00 to 15:45).
Hans Verkuil:
- create virtual complex omap3-like driver: what is needed 15 min
Could you elaborate a bit on this (or point me to a mail thread that explains the topic, if there's one)? The Xilinx V4L2 driver that I'm about to post for review might be related.
On 09/28/2014 10:53 PM, Laurent Pinchart wrote:
To save even more time, we should try to send information to the linux-media mailing list beforehand. That could allow skipping part of the presentations of the topics, and would allow attendees to start thinking before the conference. Adding links to mail threads in mail archives to the topics listed below would be useful.
It certainly doesn't hurt, but you will still need a little presentation or introduction. The reality is that not everyone will have the time to read up on everything so some sort of presentation/introduction will be useful. I know it was in the past.
Hans Verkuil:
- create virtual complex omap3-like driver: what is needed 15 min
Could you elaborate a bit on this (or point me to a mail thread that explains the topic, if there's one)? The Xilinx V4L2 driver that I'm about to post for review might be related.
I want to make something similar to vivid, but emulating a complex driver with MC, subdevs, etc. All current drivers of that type require specialized hardware, making it hard to play with. A virtual driver would make it much easier to experiment with new APIs in the context of such complex drivers. And it would provide a good skeleton source example as well.
So I am looking for input as to what sort of features should be part of such a virtual driver.
Laurent & Chris Kohn:
- runtime reconfiguration of pipelines
Laurent, can you give a time estimate for this?
Regards,
Hans
Hi Hans,
On Monday 29 September 2014 09:06:38 Hans Verkuil wrote:
It certainly doesn't hurt, but you will still need a little presentation or introduction. The reality is that not everyone will have the time to read up on everything so some sort of presentation/introduction will be useful. I know it was in the past.
I agree with you. I would still like to see information sent beforehand when possible, so attendees can start cogitating on the topics.
So I am looking for input as to what sort of features should be part of such a virtual driver.
Thank you for the clarification.
Laurent & Chris Kohn:
- runtime reconfiguration of pipelines
Laurent, can you give a time estimate for this?
I think at least 45 minutes will be needed. It depends on how much we will be able to discuss the topic on mailing lists beforehand. The problem is simple to understand but complex to solve as it will likely require framework and API adaptations in many places. I suspect that some of them could even be outside of V4L2.
Chris, what's your estimate? Have you been able to gather use cases to be posted on the linux-media list?
Laurent,
-----Original Message-----
From: Laurent Pinchart [mailto:laurent.pinchart@ideasonboard.com]
Sent: Tuesday, September 30, 2014 3:40 PM
To: Hans Verkuil; Chris Kohn
Cc: media-workshop@linuxtv.org
Subject: Re: [media-workshop] [ANN] First tentative agenda
Chris, what's your estimate ? Have you been able to gather use cases to be posted on the linux-media list ?
I have a list of use cases but I haven't drawn any block diagrams or such yet. I'm not sure I will have time to do that tomorrow, and I'm leaving for Germany on Thursday. It might have to wait till early next week, but I can send a verbal summary for a start.
I'm thinking 45 or 60 minutes maybe.
Cheers, Chris
Guys, I need estimates of the time you think your topic will take.
I am planning to make the (almost) final agenda this weekend, so I need input on this; otherwise I will just take a stab at it.
I know it will be a guesstimate, but that's OK. It tends to average out.
Regards,
Hans
Hi Hans,
On Tue, 30 Sep 2014, Hans Verkuil wrote:
I see Sakari is giving a presentation on the same Android topic, so I don't need a separate one; I'll just try to contribute to his with whatever I can.
Thanks,
Guennadi
On 09/30/2014 08:57 AM, Hans Verkuil wrote:
Guys, I need estimates of the time you think your topic will take.
I am planning to make the (almost) final agenda this weekend, so I need input on this, otherwise I will just take a stab at it.
Sorry, still no final agenda. I need the schedule for the gstreamer conference before I can finalize it, and that still hasn't been posted.
We're OK w.r.t. time, although both days will be full.
Regards,
Hans
so I need input on this, otherwise I will just take a stab at it.
I know it will be a guesstimate, but that's OK. It tends to average out.
Regards,
Hans
On 09/22/2014 01:42 PM, Hans Verkuil wrote:
[...]
Hi Hans,
On Monday, 22.09.2014, 13:42 +0200, Hans Verkuil wrote: [...]
Also let me know your estimate of how long you think the discussion will take. I know it can be hard to pick a time, but it tends to average out anyway.
[...]
=========== Complex video pipeline drivers topic ===================
[...]
Philip Zabel:
- Helping userspace to use mem2mem devices; clarification of encoder/decoder handling, clarification of format/size setting in case of dependencies between input and output formats, possibly broad categorisation of mem2mem devices (encoder, decoder, scaler, rotator, csc/filter, ...)
I guess 30min should be enough for this. The 'clarification of format/size setting in case of dependencies between input and output formats' point heavily overlaps with Nicolas' 'frame size enumeration for scaling m2m' point below.
- Hierarchical media devices - what if you have a lot of media entities and some of them are more closely related to each other than others.
I have no idea about this one. That could be settled in a few minutes with some action points or spawn long discussions... Also, there might be some overlap with Julien's point:
Julien Beraud:
- Highly reconfigurable hardware devices and the possibility to create media links and virtual subdevices in order to simplify the view from userspace.
Nicolas Dufresne:
- I'd like to discuss frame size enumeration for scaling m2m (e.g. fimc/gscaler and CUDA decoder).
regards Philipp
Hi Hans, Thanks for taking care of this!
On Mon, Sep 22, 2014 at 8:42 PM, Hans Verkuil hverkuil@xs4all.nl wrote:
Pawel:
- Existing codec API ambiguities (does this belong to this topic?)
Not really. This is about the existing codec API (the new codec API proposal below is a variant for a slightly different type of codec: those that can't parse the bitstream).
I plan for this to be similar to some previous sessions we've had about general ambiguities in the API: a list of issues with a decision on each (optionally with a short discussion if needed). The list is not short, but I don't want to take too much of everyone's time, and it really depends on how much discussion will be needed. Could be 1h or so perhaps... We could move this to the end of the day maybe?
Philipp's "clarification of encoder/decoder handling" sounds like it could be related, but I'm not sure what exactly he'd like to discuss. My plan is to discuss the state machine and try to clarify behaviors such as what happens on streamon/streamoff, how s_fmt and cropping behave, when we can/should reqbufs, etc.
I'd really like to write solid, detailed documentation of this API as a result of this discussion, but there are a lot of ambiguities.
- A proposal for a new codec API extension/mode for HW codecs that
can't parse elementary streams
This could be 15 minutes probably for just the presentation, assuming no discussion.
Thanks, Pawel
Hi Hans, Pawel,
It turns out that I will be able to attend the media workshop.
Pawel, thank you for bringing up the ambiguities of the codec API. I am happy that I'll have the chance to join this discussion (and discuss other media topics as well :) ).
You suggested discussing this at the end of the day. Could it be done on Thursday, as I have a return flight around 6pm on Friday?
Best wishes and see you all in Dusseldorf,
Hello
Hans, thanks for taking care of this :)
Here are my estimates:
--> multiple timestamps per buffer (15 min)
This should be pretty simple. I just plan to explain the problem I am having and why the timecode structure is not enough.
--> multi selections and dead pixel api (30 min)
Multiselection is just discussing how to proceed now that we have compound controls.
The dead pixel API is straightforward now that we have compound controls; it is more about agreeing on the best implementation.
I am planning to make a mini presentation next week and send it to the group.
Looking forward to the conference
Regards!
Hi Pawel,
On Wednesday, 01.10.2014, 21:28 +0900, Pawel Osciak wrote:
[...]
Philipp's "clarification of encoder/decoder handling" sounds like it could be related, I'm not sure what exactly he'd like to discuss. My plan is to discuss the state machine, try to clarify behaviors such as what happens on streamon/streamoff, how s_fmt, crop behave, when we can/should reqbufs, etc.
Yes, that is exactly what I had in mind. I'd like to get to a clear description of the order in which to correctly set up a codec for streaming, and some clarification about what should happen in error conditions, such as selecting incompatible output & capture formats, trying to decode a 4:2:2 JPEG into a 4:2:0 YUV buffer, or feeding P-frames at the beginning of the stream into an H.264 decoder. And then there is the issue of stream end and the EOS signal.
Also, for the selection API, I have a mem2mem scaler hardware block with somewhat strange limitations on cropping and composing (for example, the output size is arbitrary in principle, but since scanlines are only written out in bursts of at least 8 pixels, there might be garbage to the right of the target rectangle).
[...]
regards Philipp
On 2014-10-02 09:39, Philipp Zabel wrote:
Yes, that is exactly what I had in mind. I'd like to get to a clear description in what order to correctly set up a codec for streaming and some clarification about what should happen in error conditions, such as selecting incompatible output&capture formats, trying to decode a 4:2:2 JPEG into a 4:2:0 YUV buffer, or filling P-frames at the beginning of the stream into a H.264 decoder. And then there is the issue of stream end and the EOS signal.
Ah, now I get why you wanted to split the JPEG decoder from the others. I think it relates to the topic I wanted to bring up. We basically need a mechanism to enumerate capture formats after the output device format has been set. Otherwise, we need to do a lot of trial and error when trying to negotiate a capture format.
Nicolas
Hi Nicolas,
On Thursday, 02.10.2014, 09:44 -0400, Nicolas Dufresne wrote:
On 2014-10-02 09:39, Philipp Zabel wrote:
[...]
Ah, now I get why you wanted to split the JPEG decoder from the others.
Yes, that is one of the reasons. The other being that the CODA960 has a separately controlled hardware unit that does not use the same bitstream buffer mechanism as the other codecs.
I think it relates to the topic I wanted to bring up. We basically need a mechanism to enumerate capture formats after the output device format has been set. Otherwise, we need to do a lot of trial and error when trying to negotiate a capture format.
The JPEG decoding issue is something that can't be statically enumerated at all. We don't know what the JPEG's chroma subsampling is until after STREAMON, when the driver could peek into the first buffer's header.
We might be able to use ENUM_FMT and ENUM_FRAMESIZES on the capture side if we decide that they change their meaning after STREAMON on the output side and only list formats that the codec can generate from the already queued frames.
regards Philipp
Philipp, Kamil (and potentially others, everyone interested would be welcome):
It almost sounds like we should have a workshop of our own just for codecs (which would be a great thing; the codec API needs to be polished and documented better).
How about this: I'm already meeting with Hans to talk about this on Wednesday, if you guys would be available on Wednesday, perhaps we could all get together and spend some time on this, organize our lists, and try to iron out everything we could in a smaller quorum among us, and then present a summary at the workshop, including also discussion points that would require attention/opinion of everyone?
(And once we have everything, I'll update V4L documentation with our conclusions after the workshop).
Thanks, Pawel
On Thu, Oct 2, 2014 at 10:39 PM, Philipp Zabel p.zabel@pengutronix.de wrote:
[...]
Hi Pawel,
On Friday 03 October 2014 10:34:42 Pawel Osciak wrote:
Philipp, Kamil (and potentially others, everyone interested would be welcome):
It almost sounds like we should have a workshop of our own just for codecs (which would be a great thing; the codec API needs to be polished and documented better).
How about this: I'm already meeting with Hans to talk about this on Wednesday, if you guys would be available on Wednesday, perhaps we could all get together and spend some time on this, organize our lists, and try to iron out everything we could in a smaller quorum among us, and then present a summary at the workshop, including also discussion points that would require attention/opinion of everyone?
(And once we have everything, I'll update V4L documentation with our conclusions after the workshop).
Could you please publish the time and place for Wednesday after you've agreed on them ? I'll then try to join if I can.
Hi Laurent, Pawel,
On 10/06/2014 12:21 AM, Laurent Pinchart wrote:
[...]
Could you please publish the time and place for Wednesday after you've agreed on them ? I'll then try to join if I can.
We have a table for six next to the bar at the Radisson hotel, with power sockets available for laptops. The plan is to start discussing the codec issues at 9 am for those who are interested in this pre-mini-summit discussion.
Regards,
Hans
On Tue, Oct 14, 2014 at 4:22 AM, Hans Verkuil hverkuil@xs4all.nl wrote:
[...]
We have a table for six next to the bar at the Radisson hotel, with power sockets available for laptops. The plan is to start discussing the codec issues at 9 am for those who are interested in this pre-mini-summit discussion.
Thanks Hans, see you there!
On 2014-10-13 21:22, Hans Verkuil wrote:
We have a table for six next to the bar at the Radisson hotel, with power sockets available for laptops. The plan is to start discussing the codec issues at 9 am for those who are interested in this pre-mini-summit discussion.
I'll be there,
Nicolas
On Friday, 03.10.2014, 10:34 +0900, Pawel Osciak wrote:
[...]
How about this: I'm already meeting with Hans to talk about this on Wednesday, if you guys would be available on Wednesday, perhaps we could all get together and spend some time on this, organize our lists, and try to iron out everything we could in a smaller quorum among us, and then present a summary at the workshop, including also discussion points that would require attention/opinion of everyone?
(And once we have everything, I'll update V4L documentation with our conclusions after the workshop).
Yes, that would be great. On Wednesday I'd prefer meeting in the afternoon.
regards Philipp