<!-- Some styling for better description lists --><style type='text/css'>dt { font-weight: bold;float: left;display:inline;margin-right: 1em} dd { display:block; margin-left: 2em}</style>

   tfiga: I have some urgent stuff today; hopefully I can look at other things from tomorrow
   bbrezillon: hverkuil, pinchartl, tfiga: I think I have some basic v4l2-compliance tests for the EXT_FMT and EXT_BUF ioctls
   <br> + vivid and vimc drivers patched to implement the ext hooks
   <br> should I wait for your review of the RFC before sending a new version?
   hverkuil: You might just as well post the latest version, then that's what I'll review.
   bbrezillon: ok, I'm preparing the patches
   <br> koike, hverkuil: might be something someone already fixed, but I had to fix a NULL pointer dereference in vimc http://code.bulix.org/9i9ho4-649196
   hverkuil: <u>bbrezillon</u>: weird, I haven't come across that vimc issue. koike, can you take a look?
   <br> <u>mripard</u>: ping
   mripard: <u>hverkuil</u>: pong
   <br> hi!
   hverkuil: I'm looking at your v6 of the h264 series. Is this the final version or is a v7 planned?
   <br> I actually see some small issues in v6 (will reply), so I think a v7 is needed anyway.
   mripard: there haven't been any comments so far besides tfiga's Reviewed-by
   <br> ok, then there will be a v7 :)
   hverkuil: OK. Did you look at what is needed for a stateless h264 encoder?
   <br> (does cedrus support h264 encoding?)
   mripard: not yet
   <br> we don't
   <br> and it's not clear to me whether the encoder is stateful or stateless yet
   <br> it looks stateful, but it would be odd to have a stateless decoder and a stateful encoder
   hverkuil: Based on what I've seen for vicodec and mpeg2 it would need almost the same data structures, except that they are filled by the driver.
   <br> The main difference is that userspace wouldn't keep references to reference frames (i.e. the timestamp field in v4l2_h264_dpb_entry) since that needs to be done internally in the driver.
   <br> This made me wonder whether the timestamps should be stored in a separate control that is only created for stateless decoders. Otherwise you would have a field that is ignored when used by the stateless encoder.
   mripard: that makes sense
   koike: <u>hverkuil</u>: bbrezillon: this NULL pointer is weird, I'll take a look
   bbrezillon: <u>koike</u>: BTW, I think it's even simpler to match against dev instead of entity_name
   <br> http://code.bulix.org/kzdyxj-649329
   hfr: <u>sailus</u>: Hi Sakari, are you there ? I want to discuss about the support of multiple subdev in DCMI to support CSI bridge: https://lkml.org/lkml/2019/4/1/298
   sailus: <u>hfr</u>: Hi!
   <br> Yeah, I have a moment. Should I read the patches first?
   <br> And please cc me on the next time.
   hfr: yes it's quite straightforward
   <br> it's related to the stmipid02 you're currently reviewing
   <br> this is the camera interface part
   sailus: Ok.
   hfr: I have several questions on how to deal with two subdevs now, compared to a single one previously
   sailus: Go ahead.
   hfr: for example, for formats and controls: previously I exposed the single subdev's ones, but now I must find the "camera sensor" subdev in the whole pipeline to do the same
   <br> media_device_for_each_entity(entity, &amp;dcmi-&gt;mdev)
   <br> if (entity-&gt;function == MEDIA_ENT_F_CAM_SENSOR)
   <br> dcmi-&gt;entity.subdev = media_entity_to_v4l2_subdev(entity);
   <br> in dcmi_graph_notify_complete()
   <br> you already made the remark on stmipid02 that searching for a subdev by type is not good: what if it's not a camera sensor?
   <br> but presently I don't see how to do it any other way
   pinchartl: <u>hfr</u>: why does your CSI-2 receiver need to locate the camera sensor ?
   <br> it shouldn't matter much, it should just interact with whatever subdev is connected to its input
   <br> regardless of whether it's a camera sensor, an HDMI to CSI-2 bridge or anything else
   hfr: to expose camera sensor controls on the V4L2 side: exposure, contrast, etc...
   <br> if I ask the subdev I'm connected to, it's the bridge, so there are no such controls...
   pinchartl: you shouldn't do so. controls should be exposed on the respective subdevs
   <br> or, alternatively, you can expose *all* controls for all subdevs on the video device nodes, using control handler inheritance
   <br> but not just selected sensor controls
   <br> it should be all or nothing
   <br> (and I'd recommend nothing, exposing them on subdev nodes instead)
   hfr: let me explain the current setup
   sailus: <u>pinchartl</u>: I think that could make sense on non-MC-enabled drivers, but not on the MC-enabled ones.
   hfr: DCMI =&gt; OV5640 //
   <br> quite simple
   <br> V4L2 layer exposes camera sensor controls
   <br> means G_/S_CTRLS
   pinchartl: <u>sailus</u>: "that" = "exposing them all on the video device node", or = "exposing them on their respective subdevs" ?
   sailus: <u>pinchartl</u>: On the video node.
   pinchartl: <u>sailus</u>: agreed
   sailus: That is also the current state.
   hfr: now I have a bridge in between: but in my opinion this should change nothing on the user side
   <br> it's just a matter of data transmission, it's not changing any features
   <br> so I would expect that user sees the same controls as before
   sailus: That's a problematic situation, as you have an existing driver and you want to add more functionality.
   <br> One possibility could be to add a Kconfig option. Another is to accept the interface will be different.
   <br> I think it depends on the users as well, on what they expect.
   hfr: I just want to use the CSI version of the OV5640; there is no change in functionality at all
   sailus: Both interfaces are valid and fully supported.
   hfr: just device tree changes
   <br> to keep legacy I can expose all controls
   <br> of all subdevs, including bridge
   <br> is that OK ?
   <br> (I don't know yet how I will redirect from one subdev to another depending on the control, but I will check that afterward)
   sailus: Depending on whether you have an MC-centric driver, you need changes to driver interfaces and functionality based on that; it's not only about controls.
   hfr: this is where I'm not clear
   sailus: It's a curious case where a piece of hardware that was not considered MC-centric suddenly is.
   <br> Another option could be a module parameter.
   <br> The driver isn't overly complicated at the moment so I guess this is a possibility, too; it's nearly the same as a Kconfig option anyway.
   hfr: a module parameter or Kconfig to do what ?
   sailus: To change the driver to be MC-centric or not.
   hfr: MC-centric means that the user may have to change their code and now use media-ctl to set formats and controls ?
   sailus: It means the user needs to configure the MC pipeline before streaming on the device.
   hfr: ok, but the user will then set formats and controls on the V4L2 interface; what will happen then ?
   <br> for example "gst-launch v4l2src ..." command line
   sailus: Streaming may be started on the video node, just as with plain V4L2.
   hfr: the first thing the v4l2src plugin will do is negotiate the format through the G_/S_FMT API, and when everything is negotiated STREAMON will be sent
   <br> not just STREAMON
   <br> that's why I'm really confused about moving to MC and subdevs
   <br> compared to what is currently done with plain V4L2
   sailus: Sounds like you'll need libcamera there. :-)
   <br> That's really the problem (or one of the problems) it's intended to address.
   <br> <u>pinchartl</u>: How do things stand with libcamera nowadays?
   pinchartl: <u>sailus</u>: for capture pipelines without an ISP it would be entirely feasible to write a generic gstreamer element, libcamera isn't required
   sailus: <u>pinchartl</u>: Yeah, you could do that, too.
   pinchartl: (I haven't checked if there's an ISP in the STM pipeline)
   hfr: none
   pinchartl: but otherwise, libcamera is doing fine. better than the IPU3 driver in any case ;-)
   sailus: X-)
   hfr: will libcamera change all the GStreamer V4L2 calls to MC calls transparently ?
   <br> has someone tested this already ?
   sailus: I need to leave now, back tomorrow if not this evening.
   pinchartl: <u>hfr</u>: libcamera will require a specific gstreamer element
   sailus: Bye!
   hfr: ok thks Sakari, could we continue tomorrow ?
   pinchartl: it's work in progress, I don't expect support in gstreamer before Q4
   hfr: ok thks Laurent
   <br> anyway I will try to keep legacy behaviour with basic V4L2 as much as I can, I don't feel it's a big deal to change DCMI
   <br> Controls seem OK; I now need to dig more into format negotiation so that the formats of the bridge and sensor match
   bbrezillon: pinchartl, hverkuil: I'm currently trying to test the "multi-plane buf pointing to the same dmabuf, each plane at a different offset"
   <br> question is, how should we allocate such a buffer from v4l2-compliance?
   <br> should I extend the EXT_CREATE_BUF ioctl to support this case?
   pinchartl: I don't think so. this really aims at the import use case, I don't think we need to export such buffers
   bbrezillon: (add a flags field + a V4L2_EXT_CREATE_BUFS_FL_PLANES_SHARE_BUF flag)
   pinchartl: but that leaves your question unanswered :-)
   bbrezillon: there's UDMABUF
   pinchartl: we clearly need a central allocator...
   bbrezillon: never tried using it
   pinchartl: would UDMABUF support that ?
   <br> if it does, it's an option
   bbrezillon: and I don't know if it works the way we want
   <br> would be much simpler than trying to modify the VB2 core to support this use case :)
   <br> second question is, should we expose the plane alignment constraints in v4l2_ext_format?
   <br> I need to know it in order to know the buf size and then to pass the appropriate plane-&gt;start_offset to the EXT_QBUF request
   <br> but is it something we expect userspace to figure out on its own, or should the framework expose it?
   pinchartl: I don't think we can meaningfully expose it in a generic way
   <br> it's a largely unsolved problem, kernel-wide
   <br> or Linux-wide
   <br> if you can find a solution everybody will love you :-)
   bbrezillon: <u>pinchartl</u>: I thought the video dev would at least be able to expose its own alignment constraints
   pinchartl: <u>bbrezillon</u>: can you come up with a reasonable API that can express all possible alignment constraints ? :-)
   bbrezillon: depends on what you mean by all possible alignment constraints
   pinchartl: any constraint that a device may have
   bbrezillon: I'm just interested in plane buf alignment constraint right now :)
   <br> so it's basically "plane buf should be aligned on X bytes" where X would probably be a power of 2
   <br> <u>pinchartl</u>: maybe it's simpler if you give me one of those funky use cases you have in mind :)
   pinchartl: can you guarantee that there will never be any device requiring other types of alignment constraints ?
   <br> there are existing devices that require planes to be allocated from different DRAM banks for instance
   bbrezillon: hm, not sure this qualifies as an alignment constraint
   pinchartl: it qualifies as a constraint on memory allocation. I'm not sure reporting partial constraints would be that useful, as it won't solve the overall problem
   bbrezillon: anyway, if we don't expose the alignment constraint, what kind of policy should I use in the v4l2-compliance test that's supposed to test that?
   pinchartl: what do you want to test exactly ?
   bbrezillon: or should I give up on this generic "multiplane buffer, single dmabuf + different offsets" test?
   pinchartl: the get/set format API allows you to negotiate a bytesperline value, which reports some alignments constraints. could this be used ?
   bbrezillon: yep, it should work
   <br> it's definitely not encoding the real alignment constraint, but it should be large enough to work for most use cases