Hi Hans,
On Monday 29 September 2014 09:06:38 Hans Verkuil wrote:
On 09/28/2014 10:53 PM, Laurent Pinchart wrote:
On Monday 22 September 2014 13:42:44 Hans Verkuil wrote:
Hi all,
I have collected all suggested topics. If I missed any, then let me know.
There seem to be three groups of topics: one covers general topics, one relates to compound controls, per-frame configuration and Android, and one is about complex video pipelines.
Note that there are currently no DVB topics at all.
I have grouped the suggestions into those three high-level topics. If you disagree with where I put it, then let me know.
Also let me know your estimate of how long you think the discussion will take. I know it can be hard to pick a time, but it tends to average out anyway.
If you put up a topic, then you are expected to prepare for it, either by making a small presentation or at least by thinking through what you want to say. We have only two days and I don't want to waste any time.
To save even more time, we should try to send information to the linux-media mailing list beforehand. That would allow us to skip part of the topic presentations, and would let attendees start thinking before the conference. Adding links to relevant mail threads in the archives to the topics listed below would be useful.
It certainly doesn't hurt, but you will still need a small presentation or introduction. The reality is that not everyone will have time to read up on everything, so some sort of presentation/introduction will be useful. I know it was in the past.
I agree with you. I would still like to see information sent beforehand when possible, so we can start cogitating on the topics early.
Speaking for myself: on one of the two days I have to give a lightning talk at the GStreamer conference, and I want to attend Nicolas' GStreamer presentation as well. I don't know the times and dates yet.
If you have suggested a topic and you also have specific times you won't be available, then let me know as soon as you have the details.
I'll have to attend the IOMMU track at LPC on Friday afternoon (13:00 to 15:45).
The plan is to start each day at 9 am and go on to 5 pm or so, depending on how things work out.
[snip]
=========== Complex video pipeline drivers topic ===================
Hans Verkuil:
- create a virtual complex omap3-like driver: what is needed (15 min)
Could you elaborate a bit on this (or point me to a mail thread that explains the topic, if there is one)? The Xilinx V4L2 driver that I'm about to post for review might be related.
I want to make something similar to vivid, but emulating a complex driver with MC, subdevs, etc. All current drivers of that type require specialized hardware, making them hard to play with. A virtual driver would make it much easier to experiment with new APIs in the context of such complex drivers, and it would provide a good skeleton source example as well.
So I am looking for input as to what sort of features should be part of such a virtual driver.
Thank you for the clarification.
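To make the idea a bit more concrete, here is a rough Python sketch of the kind of topology such a virtual driver would emulate. This is only an illustrative userspace model, not kernel code; the entity names are assumptions loosely based on the omap3 ISP pipeline.

```python
# Hedged sketch of an omap3-ISP-like media controller topology.
# Entity and pad names are illustrative assumptions, not a driver API.

class Entity:
    def __init__(self, name, sinks=0, sources=0):
        self.name = name
        self.sink_pads = sinks      # number of sink (input) pads
        self.source_pads = sources  # number of source (output) pads
        self.links = []             # (source_pad, remote_entity, remote_sink_pad)

    def link(self, source_pad, remote, remote_pad):
        # A link connects one of our source pads to a remote sink pad.
        assert source_pad < self.source_pads
        assert remote_pad < remote.sink_pads
        self.links.append((source_pad, remote, remote_pad))

# A simplified capture pipeline: sensor -> CCDC -> previewer -> resizer -> video node
sensor = Entity("sensor", sinks=0, sources=1)
ccdc = Entity("ccdc", sinks=1, sources=1)
prev = Entity("previewer", sinks=1, sources=1)
resz = Entity("resizer", sinks=1, sources=1)
vdev = Entity("video-capture", sinks=1, sources=0)

sensor.link(0, ccdc, 0)
ccdc.link(0, prev, 0)
prev.link(0, resz, 0)
resz.link(0, vdev, 0)

def walk(entity):
    """Follow the first link out of each entity and return the entity names."""
    names = [entity.name]
    while entity.links:
        entity = entity.links[0][1]
        names.append(entity.name)
    return names

print(" -> ".join(walk(sensor)))
# sensor -> ccdc -> previewer -> resizer -> video-capture
```

A virtual driver exposing a topology like this through the real MC and subdev APIs would let people experiment with link setup and format negotiation without the specialized hardware.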
Laurent & Chris Kohn:
- runtime reconfiguration of pipelines
Laurent, can you give a time estimate for this?
I think at least 45 minutes will be needed. It depends on how much we will be able to discuss the topic on the mailing lists beforehand. The problem is simple to understand but complex to solve, as it will likely require framework and API adaptations in many places. I suspect that some of them could even be outside of V4L2.
Chris, what's your estimate? Have you been able to gather use cases to post on the linux-media list?
Philipp Zabel:
- Helping userspace use mem2mem devices: clarification of encoder/decoder handling, clarification of format/size setting when input and output formats depend on each other, and possibly a broad categorisation of mem2mem devices (encoder, decoder, scaler, rotator, csc/filter, ...).
- Hierarchical media devices: what to do when you have a lot of media entities and some of them are more closely related to each other than others.
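To illustrate the format dependency mentioned for mem2mem decoders, here is a hedged Python sketch: setting the coded (OUTPUT) format constrains the raw (CAPTURE) format. The resolution table and the method names are invented for illustration, not driver data or an actual API.

```python
# Hedged sketch of a mem2mem decoder format dependency: the capture
# (raw) format follows from the output (coded) format chosen by
# userspace. The table below is an invented illustration.

CODED_RESOLUTIONS = {
    "H264": (1920, 1080),
    "MPEG2": (720, 576),
}

class DecoderModel:
    def __init__(self):
        self.output_fmt = None   # coded side, set by userspace
        self.capture_fmt = None  # raw side, derived by the "driver"

    def s_fmt_output(self, fourcc):
        """Set the coded format; as a side effect, derive the raw format."""
        if fourcc not in CODED_RESOLUTIONS:
            raise ValueError("unsupported coded format")
        self.output_fmt = fourcc
        w, h = CODED_RESOLUTIONS[fourcc]
        self.capture_fmt = ("NV12", w, h)
        return self.capture_fmt

dec = DecoderModel()
print(dec.s_fmt_output("H264"))   # ('NV12', 1920, 1080)
```

The open question for the discussion is exactly this kind of side effect: when one queue's format silently changes the other queue's format, userspace needs well-defined rules for the ordering of the calls.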
Julien Beraud:
- Highly reconfigurable hardware devices, and the possibility of creating media links and virtual subdevices in order to simplify the userland view.
Nicolas Dufresne:
- I'd like to discuss frame size enumeration for scaling m2m devices (e.g. fimc/gscaler and CUDA decoder).
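For background on the frame size enumeration question: VIDIOC_ENUM_FRAMESIZES can report a stepwise range (min, max, step) for a scaler, and userspace then has to align a requested size to that grid. The following is a hedged Python sketch of such clamping; the numeric limits are invented examples, not real fimc/gscaler values.

```python
# Hedged sketch: clamping a requested dimension into a stepwise
# frame size range (min, max, step) such as VIDIOC_ENUM_FRAMESIZES
# can report. The limits below are invented examples.

def clamp_stepwise(value, minimum, maximum, step):
    """Clamp value into [minimum, maximum], aligned down to the step grid."""
    if value <= minimum:
        return minimum
    if value >= maximum:
        value = maximum
    return minimum + (value - minimum) // step * step

# Example: a scaler advertising widths from 32 to 4096 in steps of 16.
print(clamp_stepwise(1000, 32, 4096, 16))  # 992
print(clamp_stepwise(10, 32, 4096, 16))    # 32
print(clamp_stepwise(5000, 32, 4096, 16))  # 4096
```

The discussion point is what the driver should enumerate for a m2m scaler, where the valid capture sizes depend on the configured output size rather than being a fixed list.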