<!-- Some styling for better description lists --><style type='text/css'>dt { font-weight: bold;float: left;display:inline;margin-right: 1em} dd { display:block; margin-left: 2em}</style> pinchartl: <u>javier__</u>: thank you for http://hastebin.com/rudagowafi.coffee ***: cazzacarna has quit IRC (Ping timeout: 264 seconds) _jmleo: Hi there! I have a HW device able to combine two buffers from two memory source pointers into a memory dest. So I think it is a good candidate for an m2m device, but is it possible to have two output queues and one capture queue (I want to use dmabuf, of course)? hverkuil: <u>_jmleo</u>: yes, you can, but you have to use three video device nodes (2 output device nodes and one capture device node for the result). <br> Because of that you can't use the CAP_M2M capability. <br> (that's for a one-video-node type of m2m device only) _jmleo: mmmh, a video device node is /dev/videoN, right? So you mean I need three nodes in the same driver... sounds weird hverkuil: well, you have two streams going into the hardware, and since a video node supports only one stream per direction at a time... <br> you could do it with two (one m2m video node, one output video node for the second stream), but that's a bit asymmetrical.
_jmleo: ok, understood, thx javier__: <u>pinchartl</u>: with the media-ctl that has field order support, things worked <br> $ media-ctl -r -l '"tvp5150 1-005c":0->"OMAP3 ISP CCDC":0[1], "OMAP3 ISP CCDC":1->"OMAP3 ISP CCDC output":0[1]' <br> $ media-ctl -v --set-format '"OMAP3 ISP CCDC":0 [UYVY2X8 720x240 field:alternate]' <br> $ media-ctl -v --set-format '"OMAP3 ISP CCDC":1 [UYVY2X8 720x240 field:interlaced-tb]' <br> and then with yavta I could capture frames with $ yavta -f UYVY -s 720x480 -n 4 --field interlaced-tb --capture=4 -F /dev/video2 <br> now I just need to find an NTSC or PAL RCA source to see if the frame-00000[0-4].bin makes sense but capture works at least :) <br> <u>pinchartl</u>: any reason why b85761ed1820 ("media-ctl: Add field support for the media bus format") was not pushed to v4l-utils upstream? ***: teob has left javier__: also tested on top of mc next gen and capture works without issues http://hastebin.com/qekakizazu.coffee hverkuil: <u>bparrot</u>: ping <br> <u>bparrot</u>: ping pinchartl: <u>javier__</u>: mostly because there was no user in mainline I think hverkuil: <u>pinchartl</u>: ready when you are <br> and good morning! javier__: <u>pinchartl</u>: I see ***: prabhakarlad has left hverkuil: <u>bparrot</u>: regarding: https://patchwork.linuxtv.org/patch/31427/ <br> can you post an updated version with the 'value64' fix? I can confirm that that is needed. I'd like to add this fix to my pull request for tomorrow. <br> If you don't have time, is it OK if I update your patch? <br> I'd really like to get it out of the way before I go on vacation with limited availability for the next 2 1/2 weeks.
pinchartl: <u>hverkuil</u>: I am now <br> sorry for the delay hverkuil: no problem pinchartl: so <br> regarding the control stores/requests API <br> I'm working on this CSI-2 camera <br> it has one sensor <br> and can output multiple streams (think of YUV and JPEG at the same time for instance, or YUV unscaled and YUV scaled down) <br> all output frames relate to the same sensor frame <br> the camera can also output metadata <br> it's more or less a list of ID + value in a binary form, nothing very special <br> the streams and metadata are multiplexed over a single CSI-2 port <br> they're transferred by the camera to the host <br> where the host CSI-2 receiver demultiplexes them <br> the host (and the camera driver on the host) have no idea of the internal architecture of the camera <br> from their point of view it's a black box with one sensor and multiple streams <br> the way the camera is implemented will actually vary from camera to camera <br> the control protocol implemented by the camera is quite high-level <br> it consists of just a handful of operations <br> - get_capabilities returns a blob of capabilities (a list of ID + value) <br> - configure_streams is very similar to S_FMT: it takes width, height and format and returns the closest supported parameters, but does so for all streams at the same time <br> - request_capture submits a capture request, which is a list of parameters (again ID + value pairs) <br> - flush flushes the pending capture requests <br> that's pretty much it <br> the control protocol is still in the design phase so it can be changed <br> I have several points I wanted to discuss with you <br> the first one is related to the configure_streams operation <br> as I mentioned it maps very easily to S_FMT <br> but it configures all streams in one go, while S_FMT is implemented per video node <br> and I'm wondering how to map that <br> I could just store the S_FMT format internally and issue a configure_streams command at STREAMON time <br> but
that's a bit of a hack <br> as S_FMT would then not be able to return the closest match; it would always succeed, and STREAMON would fail if the configuration can't be achieved exactly hverkuil: Right. The way I planned to implement this is as follows: <br> A new compound control would be created that is effectively a struct v4l2_format. This control would be internal only, so hidden from userspace. <br> In userspace you would use V4L2_REQ_CMD_BEGIN with the request ID, call S_FMT and call CMD_END. <br> Internally the format struct would be stored in the internal control for the given request ID. <br> Ideally this would be done fully automatically by the v4l2 core. <br> When the request is applied, s_ctrl() is called for the internal control and the driver has to configure the format accordingly. <br> I did a partial implementation of this for the selection API once, but I don't think it is in my main requests branch. pinchartl: ok, there are two things there <br> first of all, the android camera hal doesn't handle formats as part of the requests <br> it configures formats when the stream is stopped in a separate operation <br> and then later uses requests to capture frames <br> (the configure_streams, request_capture and flush operations that the camera module implements come directly from android) <br> (but as I said, they could be changed) <br> then, I'd like a way for S_FMT to report the format actually configured <br> which can be different from the requested format <br> this could still go through a control store <br> as long as it could be applied before the stream starts, and the driver could easily identify it as a format-only store hverkuil: Erm, I think I misunderstood you. Does this have anything to do with requests at all, I wonder?
pinchartl: however, reporting the actual format would be more difficult <br> configure_streams is unrelated to requests <br> but could possibly be implemented using the request API <br> have you had a look at the android camera hal v3 API ? hverkuil: somewhat, but since I have never actually used it I can't claim to be an expert :-) pinchartl: ok :-) <br> http://source.android.com/devices/halref/structcamera3__device__ops.html <br> there are 8 operations <br> only configure_streams, process_capture_request and flush are relevant for this discussion <br> configure_streams is the first operation that is called <br> always when the streams are stopped <br> it will set the width, height, format and buffer parameters (number of buffers to be allocated for instance) for each stream hverkuil: You can set the formats using the mechanism described above to a specific request ID that will only contain the formats, and then call V4L2_REQ_CMD_APPLY <br> to apply them all. The driver can collect all the formats and configure them all in a single call to the camera. pinchartl: for all streams in one go <br> how would you then report the result ? through G_FMT on the same request ID ? hverkuil: yes. pinchartl: that should work <br> so you would, on each video node, call V4L2_REQ_CMD_BEGIN() - S_FMT() <br> then call V4L2_REQ_CMD_APPLY() on one of them <br> and finally call G_FMT() to retrieve the result <br> on each video node <br> correct ? hverkuil: To be precise: to set it is V4L2_REQ_CMD_BEGIN() - S_FMT() - V4L2_REQ_CMD_END() <br> and to get: V4L2_REQ_CMD_BEGIN() - G_FMT() - V4L2_REQ_CMD_END() pinchartl: could I keep the request "open" (avoid the first V4L2_REQ_CMD_END and the second V4L2_REQ_CMD_BEGIN) ? hverkuil: Good question, I had to look that up. Yes, that's possible. So BEGIN - S_FMT - APPLY - G_FMT - END. 
pinchartl: that should work <br> I'll give it a try <br> by the way, if you see changes that could make the protocol better, please feel free to tell me <br> I was thinking about having an operation to configure streams one at a time hverkuil: But note that I have not made any support yet for such 'hidden' internal controls. I didn't know what would be needed and I didn't want to start coding when it wasn't even certain when this patch series would be merged. pinchartl: but the problem with this approach is that the streams are not fully independent hverkuil: I recommend that initially you just make a 'normal' compound control to do the proof-of-concept. pinchartl: sure <br> I can handle the implementation hverkuil: http://git.linuxtv.org/cgit.cgi/hverkuil/media_tree.git/log/?h=confstore pinchartl: so you think it's better, at the protocol level, to configure all streams in one command than to use one command per stream ? hverkuil: That old branch contains code for the selection API. <br> It's old so I don't know how useful it is. <br> If there are dependencies (i.e. if stream 1 has format X, then stream 2 cannot have format Y), then doing this per stream can be difficult. Having an atomic configure_streams would simplify that. <br> But it comes at a cost. <br> In v4l2 we have similar problems (e.g. first configuring format, then crop, or the other way around). In our case we attempt to get the best match for the last called ioctl. pinchartl: that's exactly the conclusion I came to <br> I was trying to see whether I could expose enough information to make the dependencies available to the host side, but I don't think it can be done cleanly in the general case hverkuil: I don't think so either. pinchartl: I'll try conf stores for that then <br> I then have a second question <br> regarding the requests <br> the way it works is that V4L2_REQ_CMD_APPLY() will call to the control framework and result in a s_ctrl call for every control in the store, right ? 
hverkuil: not necessarily. <br> let me dig into the code, it's been a while... <br> Ah, yes, now I remember. <br> Yes, CMD_APPLY will indeed call the control framework and s_ctrl is called wherever that is needed. <br> This can be changed, of course, but my reasoning was that since APPLY can happen any time there is no need for any specific driver code to ensure synchronization with frames. pinchartl: depending on the driver it might be easier to have a central state (conf store) from which information can be pulled instead of pushing individual controls one by one hverkuil: CMD_QUEUE *does* go to a driver-provided callback, since there you would expect special code to ensure proper synchronization. pinchartl: especially in this case, the driver will need to construct one request blob to be sent to the camera from all the controls in the store <br> ah right <br> I don't need CMD_APPLY here obviously, I need CMD_QUEUE <br> or to be precise I'd use CMD_APPLY for formats and CMD_QUEUE for the requests <br> that makes sense hverkuil: it does? pinchartl: doesn't it? :-) hverkuil: You're not queuing buffers, you just want to set the formats for all the streams in a single call to the camera, right? pinchartl: yes hverkuil: And then APPLY is annoying because it goes straight to v4l2_ctrl_apply_request() instead of giving the driver the opportunity to inspect the controls that are applied. <br> Hmm, actually, APPLY doesn't work since it is per-stream. <br> Wait, it's too long ago :-) pinchartl: :-) <br> as I understand it, APPLY will immediately apply a set of parameters, using s_ctrl() <br> while CMD_QUEUE will queue the store to be applied later <br> more precisely to be applied with the corresponding buffers <br> CMD_QUEUE should thus not call s_ctrl hverkuil: True for APPLY. And APPLY is per-stream, i.e. it only applies the request set for that specific stream.
pinchartl: I'll give all this a try <br> another thing that bothers me is the metadata that is sent back by the device <br> and to some extent the controls stored in the request too <br> vendors will be allowed to implement custom controls hverkuil: CMD_QUEUE is global (i.e. you only call it for a single stream, but it applies to all streams). It is really meant to be used while streaming, where you prepare buffers and requests for each stream; CMD_QUEUE then queues up all those prepared streams and is responsible for setting the new request values at the right time. pinchartl: and the format of the metadata will be allowed to be vendor-specific <br> in which case parsing will be performed by userspace hverkuil: What you want is an APPLY_GLOBAL. Just set the requests for all streams, nothing else. pinchartl: the kernel driver will receive a metadata buffer and pass it to userspace <br> using a buffer queue <br> that shouldn't be much of an issue <br> but in the other direction I wonder how to handle vendor-specific controls as they won't be known by the kernel driver hverkuil: I.e. a req_apply callback next to the req_queue callback. Easy enough to add. pinchartl: right, I'd need some form of global APPLY hverkuil: with vendor-specific you mean camera-specific? pinchartl: yes <br> there will be a single kernel driver <br> but multiple cameras, developed by multiple vendors <br> a bit like in the UVC case hverkuil: how do you plan to pass this vendor-specific data to the camera? What API is used there? <br> just a binary blob that is passed to the camera? pinchartl: I don't know yet <br> that would be the easiest, but I dislike it as much as you do :-) ***: ocrete has quit IRC (Ping timeout: 250 seconds) hverkuil: Worst case you would need to know the size of each vendor 'control' by querying the camera, and then you can create u8 array controls of that size. Those can then be used in requests.
<br> Ideally you get more metadata, but knowing the size is the minimum you need. <br> basically QUERYCTRL :-) pinchartl: I'll see what can be done <br> I'll have to experiment with that <br> and check performance too, as we're talking about 300 controls per request <br> ok, I need to go I'm afraid, or I'll miss breakfast <br> and the day will be long so I don't want to miss it :-) hverkuil: Make sure that you give a good hint when calling v4l2_ctrl_handler_init(). If the number of controls is too far off from the real number, then the hash will be inefficient. <br> enjoy! <br> bon appetit pinchartl: thank you <br> and thank you for your help bparrot: hverkuil, pong, if you're pressed for time, yeah, go ahead and update my patch for the value64. I have not looked at the details yet, so I don't know why it is needed. In my testing it worked just fine as is. But my testing was not comprehensive <br> hverkuil, oh I see <br> I'll resend in about 5 mins if that still helps hverkuil: <u>bparrot</u>: yes, that would help! bparrot: ok np <br> hverkuil, done ***: awalls has left