<!-- Some styling for better description lists --><style type='text/css'>dt { font-weight: bold;float: left;display:inline;margin-right: 1em} dd { display:block; margin-left: 2em}</style>

   sailus: <u>ttomov</u>: Hello!
   ttomov: Hi sailus
    <br> <u>sailus</u>: I thought that Rob would reply to the ov5645 thread, but I think I'll just resend the patches and we'll see his opinion then
   hverkuil: <u>mchehab</u>: https://patchwork.linuxtv.org/patch/36729/
   <br> <u>mchehab</u>: This pull request was accepted, but I don't see the patches in your fixes branch. Just want to verify that there wasn't a mistake.
   sailus: <u>ttomov</u>: Yeah, please resend.
   <br> And feel free to ping Rob, too.
   mchehab: <u>hverkuil</u>: I remember I handled those
   <br> but not seeing on my tree
   hverkuil: that's why I wondered what happened to them.
   mchehab: or maybe I'm confused with some other CEC patches
   <br> anyway, re-added to my queue
   <br> thanks for noticing!
   <br> pinchartl, hverkuil, sailus: we can discuss the V4L2 meta device in 30 mins from now
   pinchartl: <u>mchehab</u>: I think Hans is busy today, but let's see
   mchehab: tomorrow should be ok for me as well
   pinchartl: it would work for me too
   <br> <u>mchehab</u>: have you given any thought about whether the HGT patches could be merged already ?
    mchehab: <u>pinchartl</u>: I prefer to discuss that first... we have just 3 use cases for the meta device right now, and only HGT seems to have a buffer size that doesn't change
    <br> the other two use cases have it change, depending on some parameters that aren't associated with the fourcc
   pinchartl: ok, no problem
   <br> let's wait for Hans then
   mchehab: so, before adding a new V4L device type, we need to discuss its API to fulfill all usecases
   <br> ok
   pinchartl: I'll likely send another VSP pull request this week without the metadata API
   <br> (note that there's no new device type, just a new buffer type and format)
   <br> on a semi-related note, Ricardo has sent a pull request for HSV formats
   <br> there's one VSP patch in it that I have acked
   <br> it shouldn't conflict with the rest of what I have
   <br> but if you merge that soon I'll rebase my VSP branch before sending you the pull request
   hverkuil: <u>pinchartl</u>: mchehab: I read the backlog of the original metadata discussion.
   <br> It's very busy, so I don't know how much I can contribute to the discussion planned today, but I do have one question and one comment.
   pinchartl: <u>hverkuil</u>: go ahead
   ***: JCT has quit IRC (Quit: Leaving)
   hverkuil: The question is for Laurent: how do you see the implementation for this on the omap3: using control(s) to set the windows? It wasn't clear to me what you intended to do (but I read the backlog quickly, so I may well have missed it).
   <br> The comment is that we already have a control that affects the format: V4L2_CID_ROTATE.
    <br> In all honesty though, the interaction between that control and the format is not very well defined. I would have to check how drivers handle it.
   mchehab: hmm... Does V4L2_CID_ROTATE alter the buffer size?
   pinchartl: <u>mchehab</u>: if the stride is different than the width (i.e. if there's padding at the end of the lines), it can
    hverkuil: It can if lines have to be padded, for example. Someone would have to check whether it actually does so in existing drivers.
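A minimal sketch of the point being made here: when the driver pads lines (stride larger than width), rotating by 90° swaps width and height and can therefore change `sizeimage`. The 32-byte stride alignment and greyscale format below are hypothetical; real alignment constraints are hardware- and driver-specific.

```python
# Sketch: why V4L2_CID_ROTATE can change the required buffer size when
# the driver pads lines.  The 32-byte stride alignment is a made-up
# hardware constraint for illustration only.

def align(value, alignment):
    """Round value up to the next multiple of alignment."""
    return (value + alignment - 1) // alignment * alignment

def sizeimage(width, height, bpp=1, stride_align=32):
    """bytesperline * height, as a driver would report in v4l2_pix_format."""
    return align(width * bpp, stride_align) * height

landscape = sizeimage(1280, 720)   # stride 1280 -> 921600 bytes
portrait  = sizeimage(720, 1280)   # stride 736  -> 942080 bytes
print(landscape, portrait)
```

Without padding (stride equal to width) both orientations would need exactly 1280 × 720 = 921600 bytes, which is why the interaction between rotation and buffer size only shows up on padded formats.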
   pinchartl: <u>hverkuil</u>: for OMAP3, I would expose the statistics engine parameters through controls, yes
    <br> obviously the outer crop area would be configured using the selection API
   <br> and the subsampling as well (through scaling)
    <br> but the number of subwindows would be configured through a control
   <br> that is if we decide to use controls
   <br> if we want to convey that information through S_FMT, we will need to have per-driver buffer types (or possibly sub-types)
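The earlier point that only HGT has a fixed buffer size can be sketched numerically: with a statistics engine, the metadata buffer size depends on values set through controls (number of subwindows, bins per histogram), not on the fourcc. The header size, counter width, and layout below are entirely hypothetical, not the actual OMAP3 ISP format.

```python
# Hypothetical statistics-buffer sizing: the size depends on parameters
# (subwindow count, bins per histogram) that would be set via controls,
# not on the fourcc.  All constants here are illustrative assumptions.

HEADER_BYTES = 64        # hypothetical per-buffer header
BYTES_PER_BIN = 4        # hypothetical 32-bit counters

def stats_buffer_size(n_subwindows, bins_per_window):
    """Size the driver would have to report for the metadata buffer."""
    return HEADER_BYTES + n_subwindows * bins_per_window * BYTES_PER_BIN

# Same fourcc, different control values -> different buffer sizes:
print(stats_buffer_size(4, 256))    # 4160 bytes
print(stats_buffer_size(16, 256))   # 16448 bytes
```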
   sailus: <u>hverkuil</u>: Flipping controls affect the pixel format on raw bayer formats, too.
   hverkuil: <u>sailus</u>: true: an even better example.
   pinchartl: we do have several controls that affect the way the content of a buffer is to be interpreted by userspace
   <br> and we have at least one control (V4L2_CID_ROTATE) that can affect the size of the buffer as well
    ndufresne: <u>pinchartl</u>: about FDP1, how should one be able to differentiate between a deinterlacer and a color converter/scaler ?
    <br> the usage semantics are different, and the deinterlacer may introduce latency
   pinchartl: <u>ndufresne</u>: one can't :-)
   ndufresne: /o\ so much more work will be needed
    <br> the idea behind my question is that it would be nice if we didn't need some board configuration to identify what is what, especially since videoN allocation is associated with the boot order (which may change from time to time)
   pinchartl: m2m devices can do pretty much anything, V4L2 doesn't report their purpose to userspace
   <br> we could also have m2m devices with multiple video nodes
    ndufresne: we would, but we don't, that would not work with the m2m framework of course
   <br> * we could
    <br> some m2m devices can be identified already through their format list, but for transformations, it's all obscure
   pinchartl: https://github.com/kbingham/gst-plugins-good/commit/e28b6aeae384497de57be77b51dcea7684fa3c6f
   <br> removing interlace-mode is needed to support deinterlacers
   ndufresne: indeed, but it's not sufficient
   pinchartl: it took me a day to debug that
   <br> resulting in http://www.ideasonboard.org/blog/20160805-gst-debugging.html
    ndufresne: as just removing it blindly will allow other converters to negotiate mixed framerates, which will just be wrong
    <br> you need to program the semantics properly, and for that you need to know this is a deinterlacer ;-P
   pinchartl: you could enumerate the supported fields on both the input and output
    ndufresne: I guess we are already doing that though, have you checked that it works ?
   <br> oh, no, just listing does not work of course
    <br> a color converter or scaler that receives 30fps should produce 30fps, anything else is a lie
    <br> a deinterlacer that receives 60fps may produce 60 or 30 I guess, and there are a bunch of values that are not valid
   <br> So the question is, will the driver fixate the capture rate properly, and does it get reflected in gst caps
   <br> if it's the case, then yes, that patch is correct
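The rate rule being argued here can be stated compactly: a converter/scaler must preserve the input rate, while a deinterlacer fed 60 interlaced fields per second may legitimately produce 60 fps (one frame per field) or 30 fps (one frame per field pair). A sketch of that rule, purely illustrative and not a real caps-negotiation implementation:

```python
# Sketch of the m2m rate semantics: converters must preserve the frame
# rate; deinterlacers may emit one frame per field or per field pair.
# Pure illustration; device-type names here are hypothetical.

def valid_output_rates(device, input_rate):
    if device == "converter":           # color conversion / scaling
        return {input_rate}             # anything else "is a lie"
    if device == "deinterlacer":
        return {input_rate, input_rate // 2}
    raise ValueError("unknown m2m device type: " + device)

print(valid_output_rates("converter", 30))      # {30}
print(valid_output_rates("deinterlacer", 60))   # {60, 30}
```

The point of the discussion is exactly that V4L2 gives userspace no way to know which of these rule sets applies to a given m2m node.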
   pinchartl: I don't think the driver even cares about the rate
   ndufresne: if the driver does not, then you need semantic on gst side ;-P
   <br> (which require to know what transformation is happening ;-P)
   pinchartl: I think the driver could though
   <br> but that wouldn't be enough to solve the issue at hand of gstreamer assuming that m2m devices produce a single output buffer per input buffer, would it ?
    ndufresne: no, but Kieran had started a patch for that, no ?
    <br> currently the GstV4l2Transform object assumes one in, one out, as a simplification
   kbingham: <u>ndufresne</u>: for gstreamer I don't think I got very far getting interlaced content to go through m2m-transform. In the end I tested deinterlacing by writing my own c-code.
   <br> :(
    ndufresne: the ideal method (the most generic) would be to have a dq thread for the capture side, so we don't artificially add latency (through waiting for the next incoming buffer), but GstBaseTransform has limitations
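The "dq thread" idea can be sketched with a toy model: a dedicated thread drains the capture side of an m2m device, so a deinterlacer that produces two output buffers per input doesn't have to wait for the next input to be queued before its outputs are collected. `FakeM2M` below is a stand-in for a real V4L2 m2m device; all names are hypothetical.

```python
# Sketch of a capture-side dequeue thread for an m2m deinterlacer that
# emits two output frames per queued input (one-in/one-out would stall
# half of them).  FakeM2M is a toy stand-in, not a real V4L2 device.
import queue
import threading

class FakeM2M:
    """Toy m2m deinterlacer: one field pair in, two frames out."""
    def __init__(self):
        self.capture = queue.Queue()
    def queue_input(self, field_pair):
        for frame in (field_pair * 2, field_pair * 2 + 1):
            self.capture.put(frame)

dev = FakeM2M()
frames = []

def dq_thread():
    # Drain the capture queue independently of the input-queuing loop.
    while True:
        frame = dev.capture.get()
        if frame is None:       # end-of-stream marker
            break
        frames.append(frame)

t = threading.Thread(target=dq_thread)
t.start()
for field_pair in range(3):
    dev.queue_input(field_pair)
dev.capture.put(None)
t.join()
print(frames)                   # six frames from three queued inputs
```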
   kbingham: I'd spent two days getting not very far and was very close to deadline - so I had to adapt.
   pinchartl: <u>ndufresne</u>: any chance that would be on your to-do list ? :-)
    ndufresne: <u>kbingham</u>: imho, someone should provide simpler C code to test drivers like m2m, but you'll quickly realise that each test depends on what the m2m driver is
   <br> <u>pinchartl</u>: it's quite far atm, if someone would pay for it, that wouldn't take that long ;-P
   pinchartl: :-)
   <br> I don't think it's a priority for Renesas
   <br> but maybe it will become one at some point
    ndufresne: <u>pinchartl</u>: also, one of the big slowdowns atm is that we'd like to use this for live work, but V4L2 nodes are not advertising their latency ...
    <br> so the biggest slowdown is m2m devices not being classified, forcing us into guessing and making extra complicated code to be generic
   pinchartl: the whole topic of latency needs to be discussed
   ndufresne: and the fact we have no idea what will be the latency for the processing (latency being the amount of pre-buffering here)
   pinchartl: exposing it, and minimizing it
   <br> (dequeuing buffers before they're fully processed for instance)
    ndufresne: ^ for this one in particular, the exposed latency should be the same
    <br> since if you don't wait at that node, you'll wait somewhere else, but you should save on a few overheads I believe
    <br> fyi, at least in gst, the latency excludes the processing time
    <br> that's because processing time isn't fixed; it could be the sum of the processing times, but if you introduce threads in-between, it will ideally be the maximum, though your system could be saturated, so you end up with something in the middle
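The sum-versus-maximum point can be shown with two lines of arithmetic: without threads between elements, per-element processing delays add up, while a queue/thread boundary lets elements run in parallel so the steady-state delay ideally approaches the slowest stage. The numbers are made up for illustration.

```python
# Sketch of the processing-delay aggregation described above.  This
# models only processing delay; prebuffering latency (what gst reports
# as "latency") is excluded, matching the comment in the log.

def pipeline_delay(stage_delays_ms, threaded):
    """Serial pipelines sum stage delays; ideally threaded ones take the max."""
    return max(stage_delays_ms) if threaded else sum(stage_delays_ms)

stages = [5, 12, 8]                      # hypothetical per-element delays
print(pipeline_delay(stages, False))     # serial: 25 ms
print(pipeline_delay(stages, True))      # ideal threaded: 12 ms
```

A saturated system lands somewhere between those two bounds, which is exactly why the value is hard to report statically.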
    <br> <u>pinchartl</u>: kbingham: another note, I'm sure there are m2m devices that will implement deinterlacing, color conversion, scaling and rotation all in one pass (to save memory bandwidth)
   pinchartl: s/deinterlace/alpha blending/ and I have such a device already :-)
   ndufresne: yeah, blitters are odd
    <br> but jmleo has implemented a gst video mixer with one of those (on IMX.6)
   pinchartl: nice
   <br> I have to run I'm afraid, I'll be back later
   kbingham: <u>ndufresne</u>: pinchartl: Yes, a short jump to change GstV4l2Transform to support a separate input and output device and we'll have more use cases for it :D
   ***: benjiG has left
    ndufresne: <u>kbingham</u>: I would add this feature in a manually configured element only, I don't believe those multi-device m2m can be auto-plugged in the current state
   <br> for this purpose, https://bugzilla.gnome.org/show_bug.cgi?id=742918
   kbingham: <u>ndufresne</u>: yes, there would certainly be 'other configuration' required - So I think I agree.
    ndufresne: so I could introduce a device property (read/write) with a capture-device, which by default is unset (meaning a single device), and then it's all application specific
   <br> we'll need to duplicate the extra-controls too
   ***: awalls1 has left