
   ***: Tex_Nick has left "In Linux, We Trust"
   hverkuil: <u>pinchartl</u>: regarding the discussion on histograms: I would make a new V4L2_BUF_TYPE_ for that and a new struct for inside the fmt union. Something similar to struct v4l2_sdr_format (i.e., just a buffersize field) would work much better than a width/height.
   <br> Whether a video node should be used for this or a node with a different name I am not certain about.
   <br> It could be argued that a new buf type (and associated QUERYCAP capabilities) might be enough to differentiate it from regular video.
   <br> Could this be considered meta data? I.e. V4L2_BUF_TYPE_VIDEO_META_CAPTURE?
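   For reference, the SDR format hverkuil cites as a model is defined in include/uapi/linux/videodev2.h and carries only a data format identifier and a buffer size, none of the width/height/bytesperline fields of struct v4l2_pix_format:
   <pre>
struct v4l2_sdr_format {
	__u32	pixelformat;
	__u32	buffersize;
	__u8	reserved[24];
} __attribute__ ((packed));
   </pre>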
   pinchartl: <u>hverkuil</u>: good point. I should have thought about it
   <br> and good question regarding meta-data
   <br> it could be, in a way
   <br> <u>sailus</u>: any opinion ?
   sailus: Huomenta! (Good morning!)
   pinchartl: bonjour (good morning)
   sailus: Let me think a bit.
   <br> It's too early...
   hverkuil: We also want to support meta data as part of a multiplanar format, so being able to reuse the pixelformat for that would be useful (unrelated to the buf_type discussion, but worth keeping in mind).
   sailus: <u>hverkuil</u>: Plane specific formats might be nice in some cases.
   <br> Although I have to say that in camera-related use cases, the chunks of data (image, meta) typically arrive at different points in time, and it's essential to pass them to user space as soon as they're available.
   <br> So multi-plane buffers aren't so helpful there.
   hverkuil: I know, but we should keep the option.
   sailus: Some devices are used in cameras and other purposes.
   <br> It'd be painful to support both in drivers.
   <br> Well, we don't need to decide that now.
   pinchartl: aren't formats per-buffer instead of per-plane ?
   sailus: Currently they are.
   pinchartl: and buffer types as well ?
   sailus: Yes...
   pinchartl: how would that work then ?
   sailus: The API would need to be extended.
   pinchartl: with multiplanar buffers I mean
   sailus: I'm not sure it's the best option to put statistics-related configuration into the struct v4l2_format.fmt union.
   <br> Typically a lot of that is very hardware-specific.
   pinchartl: I'm not saying we should
   <br> the question is whether to introduce V4L2_BUF_TYPE_VIDEO_META_CAPTURE?
   <br> or V4L2_BUF_TYPE_VIDEO_STATISTICS
   sailus: And if the hardware supports several modes, a menu control to select one might be nice.
   <br> I think I wouldn't create new buffer types.
   pinchartl: can you decide on that with hverkuil ? :-)
   sailus: I'd like to. :-)
   <br> For instance, a certain piece of hardware I know has just a bunch of DMA engines, but they can be used for various purposes.
   <br> The CSI-2 receiver hardware generally has no interest in the type of the data.
   <br> It could be metadata, or it could be image data.
   <br> And video buffer queues have a single type only.
   <br> As there's a single video buffer queue per DMA engine, this might become problematic at some point.
   pinchartl: good point
   <br> although the type could be selected by REQBUFS I suppose
   hverkuil: Each device node can handle only a single buf_type today (with the exception of sliced and raw VBI)
   sailus: At least that would require changing how VB2 works.
   hverkuil: We can make another exception for video and meta data, i.e. that both are supported (although only one at a time).
   sailus: Something I've thought btw.: it'd be nice to make it possible to access single planar queues using the multi-planar API. This is certainly not a top priority though.
   hverkuil: The reason I think a new buf type would be useful is that struct v4l2_pix_format really isn't a good match for meta data.
   sailus: That's certainly a valid point as well.
   <br> But should we have generic configuration for "statistics", what would you find in the corresponding struct?
   <br> The statistics generation is typically quite hardware specific.
   hverkuil: meta data is just a buffer, so all it would contain is a pixelformat and a buffersize (or buffersizes, can meta data be multiplanar?)
   sailus: There are no commonly supported "formats" such as there are for image data.
   <br> Yeah, that's my thinking as well.
   pinchartl: <u>sailus</u>: it's more about what you wouldn't find I suppose. width, height, bytesperline, colorspace, field, none of them make sense for statistics data
   hverkuil: Similar to v4l2_sdr_format.
   <br> pixelformat for meta data would typically be device specific.
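   A minimal sketch of what a metadata format following that pattern could look like; the struct and field names below are illustrative only, not an existing API:
   <pre>
/* Hypothetical sketch: a metadata counterpart to struct v4l2_sdr_format.
 * A device-specific FourCC identifies the layout of the metadata buffer,
 * and buffersize tells the application how large the buffers need to be.
 */
struct v4l2_meta_format {
	__u32	pixelformat;	/* device-specific metadata format FourCC */
	__u32	buffersize;	/* maximum buffer size in bytes */
	__u8	reserved[24];
} __attribute__ ((packed));
   </pre>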
   sailus: It might not be that much work to extend the queues to handle multiple pre-defined types btw.
   <br> I haven't analysed that thoroughly I have to say.
   pinchartl: I might not be awake enough to see a use case for multiplanar metadata or statistics
   hverkuil: <u>sailus</u>: sorry, what do you mean with "multiple pre-defined types"?
   sailus: I'm not proposing them either.
   <br> I mean that a single buffer queue could be used with multiple buffer types.
   <br> Say, CAPTURE and CAPTURE_META, if we add one.
   hverkuil: Not at the same time, I hope :-)
   sailus: No. :-)
   <br> The type is buffer specific and will be passed to VIDIOC_STREAMON as well.
   <br> So the queued buffers have to match with that type.
   <br> Or the type could be fixed at REQBUFS as pinchartl suggested.
   <br> CREATE_BUFS has to be handled as well.
   hverkuil: But yes, that should be possible. Fixed at REQBUFS/CREATE_BUFS, definitely.
   <br> In fact, it is already possible. vivid does that when deciding whether the queue is for sliced or raw VBI.
   <br> Basically when S_FMT is called the type of the corresponding queue is changed to the right BUF_TYPE for that format.
   <br> Nothing else was needed.
   <br> See drivers/media/platform/vivid/vivid-vbi-cap.c, vidioc_s_fmt_sliced_vbi_cap and vidioc_s_fmt_vbi_cap.
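   An illustrative sketch (not verbatim vivid code) of the pattern described above: the S_FMT handler switches the vb2 queue's type to match the requested format, which only works while no buffers are allocated. struct my_dev and the queue field name are placeholders:
   <pre>
static int vidioc_s_fmt_vbi_cap(struct file *file, void *priv,
				struct v4l2_format *f)
{
	struct my_dev *dev = video_drvdata(file);
	struct vb2_queue *q = &amp;dev-&gt;vbi_cap_queue;

	/* The queue type can only be changed while no buffers are allocated. */
	if (vb2_is_busy(q))
		return -EBUSY;

	/* From here on this node/queue handles raw VBI capture. */
	q-&gt;type = V4L2_BUF_TYPE_VBI_CAPTURE;
	f-&gt;type = q-&gt;type;
	return 0;
}
   </pre>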
   sailus: I suppose that's not using videobuf2?
   hverkuil: That's with vb2.
   <br> The queue type is internal, drivers can just set it to whatever they want.
   sailus: Ok.
   <br> That's good then.
   hverkuil: I think a new buf_type and format struct would map well for meta data.
   <br> with minimal effort (always nice!)
   pinchartl: I can try that
   <br> V4L2_BUF_TYPE_VIDEO_META_CAPTURE to cover all kinds of metadata, including statistics ?
   hverkuil: y
   <br> with a corresponding V4L2_CAP_VIDEO_META_CAPTURE capability.
   <br> video nodes can set this cap together with V4L2_CAP_VIDEO_CAPTURE(_MPLANE) in the device_caps field if the video node can support both.
   <br> In that case S_FMT will determine whether the video node is used to capture video or video meta data.
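   A sketch of how such a dual-purpose node might report itself; V4L2_CAP_VIDEO_META_CAPTURE is the flag proposed above and does not exist yet, so both its name and its value here are purely illustrative:
   <pre>
#define V4L2_CAP_VIDEO_META_CAPTURE	0x00800000	/* hypothetical */

static int vidioc_querycap(struct file *file, void *priv,
			   struct v4l2_capability *cap)
{
	strlcpy(cap-&gt;driver, "example", sizeof(cap-&gt;driver));
	strlcpy(cap-&gt;card, "Example capture", sizeof(cap-&gt;card));

	/* This node can capture either video or metadata, one at a time. */
	cap-&gt;device_caps = V4L2_CAP_VIDEO_CAPTURE |
			   V4L2_CAP_VIDEO_META_CAPTURE |
			   V4L2_CAP_STREAMING;
	cap-&gt;capabilities = cap-&gt;device_caps | V4L2_CAP_DEVICE_CAPS;
	return 0;
}
   </pre>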
   sailus: <u>hverkuil</u>: I'm not sure there will be use cases for this, but please do consider that there could be a use case for having different kinds of buffers allocated at the same time.
   <br> Queued buffers still need to match what's used for streaming, that's the minimum requirement.
   <br> I suppose that could be changed later on as well, as it's relaxing the requirements placed on the application. The other way is harder.
   hverkuil: 'different kinds of buffers'? You mean buffers with different sizes?
   sailus: Statistics and non-statistics buffers, for instance.
   <br> The queue is still bound to a DMA engine which can be used for both purposes.
   hverkuil: Sure, but only one purpose at a time, right?
   sailus: Yes. During streaming a single type can be used only.
   hverkuil: OK, so that's no problem.
   pinchartl: so this all means more documentation work for me... :-)
   sailus: What I mean is that it might be useful to be able to use CREATE_BUFS to create metadata and non-metadata buffers in the same queue.
   pinchartl: speaking of which
   sailus: <u>pinchartl</u>: Is it more documentation? :-)
   <br> I think the documentation is rather in a different place.
   pinchartl: would http://www.ideasonboard.org/V4L2-STATS-FMT-VSP1-HGO.html look like a reasonable amount of documentation for a histogram format ?
   sailus: In streamon, and not in reqbufs + create_bufs.
   hverkuil: <u>sailus</u>: all buffers have to be of the same buf_type, so I don't see how you can do that.
   sailus: Well, you mean in VB2 currently?
   <br> It wouldn't be a big change.
   <br> But it could be left for later as well.
   <br> We can always change the spec to allow what wasn't allowed before.
   hverkuil: <u>pinchartl</u>: that documentation looks OK to me.
   pinchartl: <u>hverkuil</u>: thank you
   <br> the kernel docbook stylesheet doesn't allow me to format the table as I would like :-/
   sailus: Would it be possible to have borders in the table cells? It might be cleaner that way.
   pinchartl: looks like we can only have either no border at all or borders around all cells
   <br> both look ugly :-/
   sailus: How does it look with borders around every cell?
   larsc: where do you want to have borders?
   sailus: The cells have a highly variable amount of content. It'd probably be clearer what's in which cell if there were borders.
   <br> I'm fine without them though.
   nohous: nut
   <br> <u>hverkuil</u>: can the pixel format change as a result of setting dv timings?
   <br> <u>hverkuil</u>: ping
   hverkuil: <u>nohous</u>: The spec is silent on the topic. Today changing dv timings will not change the pixelformat, but it may change things like the colorspace.
   nohous: <u>hverkuil</u>: asking since I'm still thinking about which approach to choose, considering timing differences with different source pixel formats
   <br> the big question is whether timing info shouldn't actually contain something like bits / pixel clock period
   hverkuil: I think pixelformats can change, even with HDMI. Particularly with YUV 4:4:4/4:2:2/4:2:0 you may end up with different pixelformats depending on the capabilities of the DMA engine.
   nohous: all right
   <br> by the way, is there any widespread user-space software that actually supports setting the dv_timings?
   hverkuil: v4l2-ctl, qv4l2 and possibly gstreamer. I know a lot of work went into gstreamer, I just can't remember if this was done as well.
   <br> It's not all that common because there is no kernel support for any of the commonly available HDMI PCI capture cards.
   <br> All kernel HDMI capture support is on embedded systems, and the software for those tends to be proprietary.
   nohous: and what is then the correct order for calling s_timings and s_fmt? :-)
   <br> (from user space)
   hverkuil: the correct order is:
   <br> 1) S_INPUT
   <br> 2) S_DV_TIMINGS/S_STD
   <br> 3) S_FMT
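   From user space, that ordering translates into something like the following (error handling trimmed, device path is just an example):
   <pre>
#include &lt;fcntl.h&gt;
#include &lt;sys/ioctl.h&gt;
#include &lt;linux/videodev2.h&gt;

int configure(const char *devname)	/* e.g. "/dev/video0" */
{
	int fd = open(devname, O_RDWR);
	int input = 0;
	struct v4l2_dv_timings timings = {};
	struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };

	/* 1) S_INPUT */
	ioctl(fd, VIDIOC_S_INPUT, &amp;input);

	/* 2) S_DV_TIMINGS: detect and set the timings of the incoming signal. */
	ioctl(fd, VIDIOC_QUERY_DV_TIMINGS, &amp;timings);
	ioctl(fd, VIDIOC_S_DV_TIMINGS, &amp;timings);

	/* 3) S_FMT: let the driver propose a format for those timings, then set it. */
	ioctl(fd, VIDIOC_G_FMT, &amp;fmt);
	ioctl(fd, VIDIOC_S_FMT, &amp;fmt);

	return fd;
}
   </pre>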
   nohous: <u>hverkuil</u>: ok, i guess this is specified somewhere, sorry
   <br> <u>hverkuil</u>: ok, i give up, i can't find it. Is this order somehow implied or is there an explicit statement somewhere?
   hverkuil: I don't think it is explicitly stated anywhere.
   <br> Although VIDIOC_S_INPUT does state that the standard can change.
   <br> The fact that S_DV_TIMINGS/S_STD can change the format is not explicitly stated, but since changing this will change resolution as well it is sort of obvious.
   nohous: ok, i'll try to be better at inferring the obvious ;-)
   <br> btw, just to see that we are moving in right direction: http://maniny46.cz/~nohous/bastly/sdigrab1.jpg
   <br> that's hd-sdi from camera running to fullscreened mplayer
   <br> (the hw is sort of dev board)
   hverkuil: Now, that's a proper desk! Nothing good can ever come from a clean desk :-)
   nohous: that cable mess is mostly a result of the lack of a docking station for my laptop :-)
   ***: dannas has left
   mchehab: pinchartl, hverkuil, sailus: for sure a new buf type is required when adding stats/metadata. I assumed from our discussions yesterday that you were planning to do that, when you mentioned SDR
   <br> I'm not sure, however, if a _MPLANE for meta-data is needed
   pinchartl: <u>mchehab</u>: I'm not sure either. I'll leave it out to start with
   mchehab: what would be the use case for sending multiple metadata planes at the same time?
   <br> if we don't have any usecase, let's make it simpler, and add just a single-plane buf
   <br> I still think that having a different name for the stats devnode is more coherent to what we've done so far for vbi/raw_vbi/swradio/radio
   pinchartl: v4l-meta ?
   mchehab: basically, when the output (or input) data is not video, we use a different name
   <br> v4l is likely overkill, I guess...
   larsc: /dev/meta?
   pinchartl: meta without context is quite generic
   hverkuil: /dev/meta is too generic.
   mchehab: yes
   hverkuil: v4l-meta would work for me.
   pinchartl: we have v4l-subdev already
   mchehab: ok, works for me
   hverkuil: or perhaps just v4l-data
   pinchartl: <u>hverkuil</u>: I was thinking about that too
   hverkuil: I like that better, actually.
   pinchartl: if we had to redo it today, I'd use v4l-data to transfer video data and v4l-ctrl for configuration (controls + formats)
   hverkuil: But then you can't reuse the same node for video and data. Something sailus wanted.
   mchehab: <u>hverkuil</u>: we use radio and video devnodes already for just one DMA engine
   <br> (and swradio)
   pinchartl: <u>hverkuil</u>: technically speaking, nothing would prevent using the V4L2 video API on a VFL_TYPE_DATA devnode
   mchehab: <u>pinchartl</u>: actually, the core blocks abusing it
   <br> we had this issue in the past...
   <br> where the API would allow receiving radio via /dev/video
   <br> and receiving video via /dev/radio
   pinchartl: yes, but it's a new device type, so we're free to do what we want
   hverkuil: The original V4L2 spec allowed that, but it was extremely messy. Few, if any, drivers supported that.
   mchehab: <u>pinchartl</u>: the problem arises when userspace apps would try to start both buffer modes at the same time via different nodes
   <br> things get messy really quick
   <br> and makes it harder for the driver to follow the V4L2 spec
   <br> hverkuil, pinchartl: I don't like "v4l-data"...
   <br> this can be confusing... as a video stream or an SDR stream is also data
   pinchartl: we have a proliferation of device nodes that will likely require some major API refactoring at some point
   mchehab: I guess v4l-meta is better
   pinchartl: it would make sense to use v4l-data for data transfer and v4l-ctrl for control, a bit like in ALSA or DRM/KMS
   <br> not sure if we need to address that now though
   <br> I'll start with v4l-meta for the initial implementation
   sailus: <u>mchehab</u>: The DMA engines generally are not aware of what kind of data they transfer.
   <br> A single queue could be used for video data and later for metadata.
   <br> For that reason I'd prefer to keep the device node names unaware of which purpose the queue is used for at any given point of time.
   mchehab: <u>sailus</u>: on V4L/V4L2, a devnode is associated with a usage, and not with a dma engine
   <br> the same DMA engine is thus mapped with different devnodes, depending on its usage type
   pinchartl: <u>mchehab</u>: that's true today, but hardware is moving away from that model
   <br> I believe we'll have to adapt the API to take that trend into consideration at some point
   mchehab: this is not a hardware model, it is a Linux API design
   pinchartl: again, it might not be needed right now
   <br> Linux APIs are based on the requirements of the devices
   mchehab: decided back in 1997, when V4L was introduced
   pinchartl: or at least need to take them into account
   <br> devices are pretty different nowadays from what they were 20 years ago :-)
   <br> so I believe we'll have to reconsider those design decisions at some point
   <br> that might become V4L3 ;-)
   sailus: <u>mchehab</u>: pinchartl might not need it now but I think I will soon.
   <br> What would you do, have the same video buffer queue accessible through two device nodes?
   mchehab: we might do that some day, but this will be a very big change...
   <br> especially because not all drivers use the core frameworks
   sailus: Indeed.
   <br> The devices we support now are very different from those ten years ago.
   mchehab: <u>sailus</u>: a video buffer queue is specific to a buffer type
   sailus: And the new ones are increasingly different from the 2006 devices.
   mchehab: if you have more than one buffer type, you have more than one queue
   sailus: <u>mchehab</u>: hverkuil pointed out the queue type can be changed by the driver.
   mchehab: yes, by calling create_bufs or reqbufs again
   <br> destroying the old queue and creating a new one
   <br> you can't re-use, because sizes are format-dependent
   sailus: I don't think there's a need to re-create the queue.
   <br> The video buffer queues have no formats.
   <br> Buffers do.
   pinchartl: but even if we recreate the queue, we have the problem of the device node type
   <br> V4L2_BUF_TYPE_* != VFL_TYPE_*
   mchehab: <u>pinchartl</u>: it is different because multiple buffer types can be used on the same devnode
   <br> /dev/video supports non-planar and planar types
   <br> /dev/vbi supports raw VBI and sliced VBI types
   sailus: Um, true. For practical purposes, though, metadata is VFL_TYPE_GRABBER.
   ***: awalls has left
   mchehab: no, it would be a VFL_TYPE_METADATA
   <br> Documentation/video4linux/v4l2-framework.txt:VFL_TYPE_GRABBER: videoX for video input/output devices
   <br> Documentation/video4linux/v4l2-framework.txt:VFL_TYPE_VBI: vbiX for vertical blank data (i.e. closed captions, teletext)
   <br> Documentation/video4linux/v4l2-framework.txt:VFL_TYPE_RADIO: radioX for radio tuners
   <br> Documentation/video4linux/v4l2-framework.txt:VFL_TYPE_SDR: swradioX for Software Defined Radio tuners
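   By analogy with the types listed above, registering such a node from a driver would presumably look like the sketch below; VFL_TYPE_METADATA, struct my_dev and my_meta_fops are assumptions, not existing kernel API:
   <pre>
static int register_meta_node(struct my_dev *dev)
{
	struct video_device *vdev = &amp;dev-&gt;meta_vdev;

	strlcpy(vdev-&gt;name, "my-metadata", sizeof(vdev-&gt;name));
	vdev-&gt;v4l2_dev = &amp;dev-&gt;v4l2_dev;
	vdev-&gt;fops = &amp;my_meta_fops;
	vdev-&gt;queue = &amp;dev-&gt;meta_queue;
	vdev-&gt;release = video_device_release_empty;

	/* Would create a new devnode class, following videoX/vbiX/swradioX. */
	return video_register_device(vdev, VFL_TYPE_METADATA, -1);
}
   </pre>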
   sailus: So you'd create a link from each DMA engine to two device nodes, each of which has a buffer queue related to the same DMA engine? The APIs exposed by the device nodes are largely the same.
   mchehab: <u>sailus</u>: that's what happens already
   sailus: Do you have an example?
   mchehab: on almost all devices, VFL_TYPE_GRABBER, VFL_TYPE_RADIO and VFL_TYPE_SDR correspond to the same DMA engine
   <br> the DMA engine is either streaming video or radio or SDR
   sailus: Radio or SDR does not use video buffers.
   <br> Statistics or metadata does.
   mchehab: radio doesn't use buffers (except on one driver - pvrusb2)...
   sailus: Oh, SDR does?
   mchehab: sdr does use
   sailus: I don't think there's a driver that would use both SDR and GRABBER.
   mchehab: right now, there isn't...
   sailus: Well. I think it's doable, there will just be a lot more device nodes.
   mchehab: I started working on adding SDR support for cx88, but didn't have time to finish
   sailus: And device nodes aren't exactly what we're short of already.
   <br> It's only possible to use one or the other at a time anyway.
   <br> Let's see how the implementation is.
   <br> Having a separate buffer and VFL types for metadata (including statistics) is better in line with what already exists.
   <br> Still this kind of design accelerates the need for V4L3.
   <br> Which is not necessarily a bad thing as such.
   pinchartl: <u>sailus</u>: I agree
   <br> I'll go for VFL_TYPE_METADATA then
   mchehab: as I said before, a V4L3 API would require *a lot* of work at the driver level to abstract the V4L2 API in a way that would allow the same driver to be called either via V4L2 or V4L3...
   <br> and people with lots of hardware and spare time to convert the existing drivers
   sailus: <u>mchehab</u>: What is spare time?
   pinchartl: <u>mchehab</u>: don't take me wrong, I don't consider such a move lightly, but I think it will be unavoidable one day
   mchehab: I've no idea... I haven't had spare time for years
   <br> <u>pinchartl</u>: are you stepping up to convert all existing drivers to a new API?
   pinchartl: that's why I think it would be beneficial to start considering it in the near future to take it into account in our development decisions for the next couple of years and be prepared
   <br> compared to having to switch suddenly
   <br> I wish I could :-)
   <br> I don't want to convert anything now
   mchehab: I did lots of V4L -&gt; V4L2 conversions in the past...
   pinchartl: neither myself, nor push others to do it
   sailus: I think we could start with one or two drivers which are most in need of the new API, and begin converting the rest when they're working well.
   mchehab: I won't do any such conversion in the future
   pinchartl: <u>sailus</u>: I don't think we should go that way now
   mchehab: as those were not good times
   sailus: The risk is that many drivers would continue using the old APIs for a long time.
   pinchartl: to start with, there's no such thing as V4L3 today, so nothing to convert to
   sailus: The key is that the conversion should be straightforward.
   mchehab: <u>sailus</u>: I'll only agree with a V4L3 if we have a team of volunteers to work on porting all existing drivers to it
   pinchartl: what I'm interested in is finding out how our framework and APIs should evolve in the long term
   <br> and make sure that the incremental development we do today goes in that direction
   <br> it's not about conversion, it's about minimizing the need for conversion
   sailus: <u>pinchartl</u>: Agreed.
   pinchartl: it's really about the long term strategy plan
   mchehab: <u>pinchartl</u>: the problem is that there will always be border cases...
   pinchartl: <u>mchehab</u>: there will always be "creative" hardware engineers :-)
   mchehab: pvrusb2 with radio and audio via /dev/video and /dev/radio is one of such examples
   <br> uvcvideo that doesn't use control framework
   <br> and all VB1 drivers that we still have
   <br> the point is: a conversion from V4L2 to anything else:
   <br> - won't improve anything for existing drivers;
   <br> - would require a lot of effort;
   <br> - would require apps to be re-designed;
   <br> - would cause regressions.
   <br> it is very hard to convince someone to do that, especially as no gain will be obtained for the existing drivers
   <br> even when we did the conversion from V4L to V4L2, where there were some improvements, it was hard enough to do it
   hverkuil: I have zero interest in a V4L3 at this moment. It is much better to concentrate efforts into good internal frameworks.
   <br> That will also make it much easier to switch to a V4L3 if we ever need that.
   mchehab: (V4L was really crap with regard to video standards support - and there were several hacks in the drivers to make some video standards work)
   pinchartl: we'll see what the future brings, I can't foresee everything
   hverkuil: Also never forget that the vast majority of V4L2 drivers in the kernel work perfectly fine with V4L2.
   pinchartl: (we still have 10 vb1 drivers, that's impressive)
   <br> 9, sorry
   <br> or possibly a few more using soc-camera
   <br> 11 actually
   mchehab: <u>pinchartl</u>: the point is: converting a driver to use VB2 doesn't bring much gain, but requires a lot of effort and testing
   <br> and access to legacy hardware
   <br> that's why V4L-&gt;V4L2 conversion took a lot of years to happen, and VB1-&gt;VB2 is still not finished
   sailus: I believe we should start working on V4L3 when it's becoming apparent we need to solve existing problems that cannot be solved while retaining compatibility with V4L2.
   <br> Then we can see what's needed to convert the rest of the drivers when we have one or two driver using the new APIs.
   <br> This is all speculation before we're there.
   mchehab: <u>sailus</u>: yes, but, at that point, I'll require a team of people that will be willing to do the conversion
   <br> and probably won't merge V4L3 patches without such conversion, to avoid the mess we had with V4L-&gt;V4L2 conversion
   ***: benjiG has left
   courrier: Hey guys, I'm getting a "VIDIOC_STREAMON error 28, No space left on device" when opening two HD cams at the same time, knowing that:
   <br> 1) They are plugged on different USB buses, see my lsusb: http://paste.debian.net/432369/
   <br> 2) "modprobe uvcvideo quirks=128" seems to execute properly but does not fix the issue
   <br> 3) I'm already opening the stream in MJPEG via OpenCV
   <br> Any other tip? :)
   <br> Also, I should mention that I can read one camera successfully in HD; this only happens if I open both at the same time
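   (Error 28 is ENOSPC, which for UVC devices usually means the USB host controller could not reserve enough isochronous bandwidth for both streams. Besides keeping the cameras on separate host controllers, a common mitigation is to request a compressed format at a lower frame rate so each camera asks for less bandwidth; a rough sketch, with example device path and sizes:)
   <pre>
#include &lt;fcntl.h&gt;
#include &lt;sys/ioctl.h&gt;
#include &lt;linux/videodev2.h&gt;

int open_low_bandwidth(const char *devname)	/* e.g. "/dev/video0" */
{
	int fd = open(devname, O_RDWR);
	struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
	struct v4l2_streamparm parm = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };

	/* Compressed format: much smaller isochronous payloads than raw YUYV. */
	fmt.fmt.pix.width = 1280;
	fmt.fmt.pix.height = 720;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
	fmt.fmt.pix.field = V4L2_FIELD_NONE;
	ioctl(fd, VIDIOC_S_FMT, &amp;fmt);

	/* A lower frame rate lets the device negotiate a smaller bandwidth reservation. */
	parm.parm.capture.timeperframe.numerator = 1;
	parm.parm.capture.timeperframe.denominator = 15;
	ioctl(fd, VIDIOC_S_PARM, &amp;parm);

	return fd;
}
   </pre>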
   ***: Tex_Nick has left "In Linux, We Trust"
   <br> javier__ has quit IRC (Quit: leaving)
   courrier: I have to go but stay connected, if you have any clue please keep me posted :)
   ***: pmmd has quit IRC (Quit: No Ping reply in 180 seconds.)
   <br> awalls has left
   sailus: <u>mchehab</u>: Ping?
   mchehab: <u>sailus</u>: pong
   sailus: <u>mchehab</u>: Quick note about the vb2 plane validation fixes.
   <br> It'd be nice to get them into v4.6.
   mchehab: sure
   <br> I didn't have time to handle patches yet...
   <br> lots of other things to do after returning from long trips
   sailus: No worries.
   <br> I just wanted to make sure they're not forgotten. :-)
   mchehab: did you send a pull request with [PULL FIXES]?
   <br> if so, it will get higher priority for me when handling it
   sailus: Yes.
   <br> I sent one against master as well, please ignore it.
   <br> It was late.
   mchehab: ok
   <br> feel free to mark the second pull request as superseded at patchwork
   sailus: It's "[GIT FIXES"... I hope that works as well.
   ***: teemu_ has quit IRC (Ping timeout: 264 seconds)
   pinchartl: speaking of pull requests, I've sent one for the VSP1 driver that includes the entity obj_type patches that were discussed, in the version that was agreed upon on the mailing list
   <br> I'd appreciate if we could get that in v4.7
   mchehab: sure, if the patches are ok, I'll very likely be handling it this week
   pinchartl: thanks
   <br> I hope they're fine :-)
   sailus: Marked the obsolete one as superseded.
   pinchartl: it's mostly boring driver churn
   sailus: And it's obviously for v4.6, not v4.7...
   mchehab: yeah, I usually don't care much about it... I'll look more at the patches documenting the new types in DocBook
   <br> (if you're adding new entity types)
   pinchartl: they're not documented in DocBook, it's an internal API
   <br> they're documented in kerneldoc
   mchehab: ok, I'll look the patches along the week
   <br> by the time I handle the pull request
   pinchartl: thanks
   <br> there are 3 patches of interest
   <br> <u>media</u>: Add obj_type field to struct media_entity
   <br> <u>media</u>: Rename is_media_entity_v4l2_io to is_media_entity_v4l2_video_device
   <br> <u>v4l</u>: subdev: Add pad config allocator and init
   <br> Hans asked me for the third one during the workshop last week
   <br> all the rest is driver code
   <br> <u>mchehab</u>: regarding VFL_TYPE_META, I plan to implement support for capture only for now as we have no use case for output. is that fine with you ?
   headless: <u>pinchartl</u>: is DU HDMI still working for you?
   <br> doesn't work on SILK anymore
   pinchartl: <u>headless</u>: no, it's broken
   <br> I need to look into that
   headless: <u>pinchartl</u>: OK, good to know it's a known issue :-)
   <br> no signal here
   <br> prosit (cheers)