Who | What | When |
---|---|---|
*** | awalls has left | [00:10] |
............................................... (idle for 3h52mn) | ||
pinchartl | javier__: thank you for http://hastebin.com/rudagowafi.coffee | [04:02] |
....... (idle for 30mn) | ||
*** | cazzacarna has quit IRC (Ping timeout: 264 seconds) | [04:32] |
....................................... (idle for 3h10mn) | ||
_jmleo | Hi there ! I have a HW device able to combine two buffers from two memory source pointers into a memory dest. So, I think it is a good candidate for a m2m device, but is it possible to have two capture queues and one output queue (I want to use dmabuf of course) ? | [07:42] |
hverkuil | _jmleo: yes, you can, but you have to use three video device nodes (2 output device nodes and one capture device node for the result).
Because of that you can't use the CAP_M2M capability. (that's only for the single-video-node type of m2m device) | [07:44]
_jmleo | mmmh, video device node is /dev/videoN, right ? So, you mean I need three nodes in the same driver... sounds weird | [07:52] |
hverkuil | well, you have two streams going into the hardware, and since a video node supports only one stream per direction at a time...
you could do it with two (one m2m video node, one output video node for the second stream), but that's a bit unsymmetrical. | [07:55] |
_jmleo | ok, understood, thx | [08:05] |
............................. (idle for 2h21mn) | ||
javier__ | pinchartl: with the media-ctl that has field order support, things worked
$ media-ctl -r -l '"tvp5150 1-005c":0->"OMAP3 ISP CCDC":0[1], "OMAP3 ISP CCDC":1->"OMAP3 ISP CCDC output":0[1]'
$ media-ctl -v --set-format '"OMAP3 ISP CCDC":0 [UYVY2X8 720x240 field:alternate]'
$ media-ctl -v --set-format '"OMAP3 ISP CCDC":1 [UYVY2X8 720x240 field:interlaced-tb]'
and then with yavta I could capture frames with
$ yavta -f UYVY -s 720x480 -n 4 --field interlaced-tb --capture=4 -F /dev/video2
now I just need to find an NTSC or PAL RCA source to see if the frame-00000[0-4].bin files make sense
but capture works at least :)
pinchartl: any reason why b85761ed1820 ("media-ctl: Add field support for the media bus format") was not pushed to v4l-utils upstream? | [10:26]
.... (idle for 17mn) | ||
*** | teob has left | [10:51] |
javier__ | also tested on top of mc next gen and capture works withou issues http://hastebin.com/qekakizazu.coffee | [10:52] |
...... (idle for 29mn) | ||
hverkuil | bparrot: ping | [11:21] |
....... (idle for 32mn) | ||
bparrot: ping | [11:53] | |
...................... (idle for 1h49mn) | ||
pinchartl | javier__: mostly because there was no user in mainline I think | [13:42] |
hverkuil | pinchartl: ready when you are
and good morning! | [13:47] |
javier__ | pinchartl: I see | [13:55] |
*** | prabhakarlad has left | [14:01] |
hverkuil | bparrot: regarding: https://patchwork.linuxtv.org/patch/31427/
can you post an updated version with the 'value64' fix? I can confirm that that is needed. I'd like to add this fix to my pull request for tomorrow. If you don't have time, is it OK if I update your patch? I'd really like to get it out of the way before I go on vacation with limited availability for the next 2 1/2 weeks. | [14:05] |
pinchartl | hverkuil: I am now
sorry for the delay | [14:09] |
hverkuil | no problem | [14:10] |
pinchartl | so
regarding the control stores/requests API
I'm working on this CSI-2 camera
it has one sensor and can output multiple streams
(think of YUV and JPEG at the same time for instance, or YUV unscaled and YUV scaled down)
all output frames relate to the same sensor frame
the camera can also output metadata
it's more or less a list of ID + value in a binary form, nothing very special
the streams and metadata are multiplexed over a single CSI-2 port
they're transferred by the camera to the host, where the host CSI-2 receiver demultiplexes them
the host (and the camera driver on the host) have no idea of the internal architecture of the camera
from their point of view it's a black box with one sensor and multiple streams
the way the camera is implemented will actually vary from camera to camera
the control protocol implemented by the camera is quite high-level
it consists of just a handful of operations:
- get_capabilities returns a blob of capabilities (a list of ID + value)
- configure_streams is very similar to S_FMT; it takes width, height and format and returns the closest supported parameters, but does so for all streams at the same time
- request_capture submits a capture request, which is a list of parameters (again ID + value)
- flush flushes the pending capture requests
that's pretty much it
the control protocol is still in the design phase, so it can be changed
I have several points I wanted to discuss with you
the first one is related to the configure_streams operation
as I mentioned, it maps very easily to S_FMT, but it configures all streams in one go, while S_FMT is implemented per video node
and I'm wondering how to map that
I could just store the S_FMT format internally and issue a configure_streams command at STREAMON time
but that's a bit of a hack, as S_FMT would then not be able to return the closest match; it would always succeed, and STREAMON would fail if the configuration can't be achieved exactly | [14:10]
hverkuil | Right. The way I planned to implement this is as follows:
A new compound control would be created that is effectively a struct v4l2_format. This control would be internal only, so hidden from userspace. In userspace you would use V4L2_REQ_CMD_BEGIN with the request ID, call S_FMT and call CMD_END. Internally the format struct would be stored in the internal control for the given request ID. Ideally this would be done fully automatically by the v4l2 core. When the request is applied, s_ctrl() is called for the internal control and the driver has to configure the format accordingly. I did a partial implementation of this for the selection API once, but I don't think it is in my main requests branch. | [14:20]
pinchartl | ok, there's two things there
first of all, the android camera hal doesn't handle formats as part of the requests
it configures formats when the stream is stopped, in a separate operation
and then later uses requests to capture frames
(the configure_streams, request_capture and flush operations that the camera module implements come directly from android)
(but as I said, they could be changed)
then, I'd like a way for S_FMT to report the format actually configured
which can be different from the requested format
this could still go through a control store, as long as it could be applied before the stream starts, and the driver could easily identify it as a format-only store | [14:26]
hverkuil | Erm, I think I misunderstood you. Does this have anything to do with requests at all, I wonder? | [14:29] |
pinchartl | however, reporting the actual format would be more difficult
configure_streams is unrelated to requests, but could possibly be implemented using the request API
have you had a look at the android camera hal v3 API? | [14:29]
hverkuil | somewhat, but since I have never actually used it I can't claim to be an expert :-) | [14:30] |
pinchartl | ok :-)
http://source.android.com/devices/halref/structcamera3__device__ops.html
there are 8 operations
only configure_streams, process_capture_request and flush are relevant for this discussion
configure_streams is the first operation that is called, always when the streams are stopped
it will set the width, height, format and buffer parameters (number of buffers to be allocated, for instance) for each stream | [14:31]
hverkuil | You can set the formats using the mechanism described above to a specific request ID that will only contain the formats, and then call V4L2_REQ_CMD_APPLY
to apply them all. The driver can collect all the formats and configure them all in a single call to the camera. | [14:32] |
pinchartl | for all streams in one go
how would you then report the result ? through G_FMT on the same request ID ? | [14:32] |
hverkuil | yes. | [14:34] |
pinchartl | that should work
so you would, on each video node, call V4L2_REQ_CMD_BEGIN() - S_FMT()
then call V4L2_REQ_CMD_APPLY() on one of them
and finally call G_FMT() on each video node to retrieve the result
correct? | [14:34]
hverkuil | To be precise: to set it is V4L2_REQ_CMD_BEGIN() - S_FMT() - V4L2_REQ_CMD_END()
and to get: V4L2_REQ_CMD_BEGIN() - G_FMT() - V4L2_REQ_CMD_END() | [14:35] |
pinchartl | could I keep the request "open" (avoid the first V4L2_REQ_CMD_END and the second V4L2_REQ_CMD_BEGIN) ? | [14:36] |
hverkuil | Good question, I had to look that up. Yes, that's possible. So BEGIN - S_FMT - APPLY - G_FMT - END. | [14:38] |
pinchartl | that should work
I'll give it a try
by the way, if you see changes that could make the protocol better, please feel free to tell me
I was thinking about having an operation to configure streams one at a time | [14:38]
hverkuil | But note that I have not made any support yet for such 'hidden' internal controls. I didn't know what would be needed and I didn't want to start coding when it wasn't even certain when this patch series would be merged. | [14:39] |
pinchartl | but the problem with this approach is that the streams are not fully independent | [14:40] |
hverkuil | I recommend that initially you just make a 'normal' compound control to do the proof-of-concept. | [14:40] |
pinchartl | sure
I can handle the implementation | [14:41] |
hverkuil | http://git.linuxtv.org/cgit.cgi/hverkuil/media_tree.git/log/?h=confstore | [14:42] |
pinchartl | so you think it's better, at the protocol level, to configure all streams in one command than to use one command per stream ? | [14:42] |
hverkuil | That old branch contains code for the selection API.
It's old so I don't know how useful it is. If there are dependencies (i.e. if stream 1 has format X, then stream 2 cannot have format Y), then doing this per stream can be difficult. Having an atomic configure_streams would simplify that. But it comes at a cost. In v4l2 we have similar problems (e.g. first configuring format, then crop, or the other way around). In our case we attempt to get the best match for the last called ioctl. | [14:42] |
pinchartl | that's exactly the conclusion I came to
I was trying to see whether I could expose enough information to make the dependencies available to the host side, but I don't think it can be done cleanly in the general case | [14:45] |
hverkuil | I don't think so either. | [14:46] |
pinchartl | I'll try conf stores for that then
I then have a second question regarding the requests
the way it works is that V4L2_REQ_CMD_APPLY() will call into the control framework and result in an s_ctrl call for every control in the store, right? | [14:47]
hverkuil | not necessarily.
let me dig into the code, it's been a while... Ah, yes, now I remember. Yes, CMD_APPLY will indeed call the control framework and s_ctrl is called wherever that is needed. This can be changed, of course, but my reasoning was that since APPLY can happen at any time, there is no need for any specific driver code to ensure synchronization with frames. | [14:48]
pinchartl | depending on the driver it might be easier to have a central state (conf store) from which information can be pulled instead of pushing individual controls one by one | [14:55] |
hverkuil | CMD_QUEUE *does* go to a driver-provided callback, since there you would expect special code to ensure proper synchronization. | [14:55] |
pinchartl | especially in this case, the driver will need to construct one request blob to be sent to camera from all the controls in the store
ah right I don't need CMD_APPLY here obviously, I need CMD_QUEUE or to be precise I'd use CMD_APPLY for formats and CMD_QUEUE for the requests that makes sense | [14:56] |
hverkuil | it does? | [14:57] |
pinchartl | doesn't it ? :-) | [14:57] |
hverkuil | You're not queuing buffer, you just want to set the formats for all the streams in a single call to the camera, right? | [14:58] |
pinchartl | yes | [14:58] |
hverkuil | And then APPLY is annoying because it goes straight to v4l2_ctrl_apply_request() instead of giving the driver the opportunity to inspect the controls that are applied.
Hmm, actually, APPLY doesn't work since it is per-stream. Wait, it's too long ago :-) | [14:59]
pinchartl | :-)
as I understand it, APPLY will immediately apply a set of parameters, using s_ctrl()
while CMD_QUEUE will queue the store to be applied later
more precisely, to be applied with the corresponding buffers
CMD_QUEUE should thus not call s_ctrl | [15:00]
hverkuil | True for APPLY. And APPLY is per-stream, i.e. it only applies the request set for that specific stream. | [15:01] |
pinchartl | I'll give all this a try
another thing that bothers me is the metadata that is sent back by the device
and, to some extent, the controls stored in the request too
vendors will be allowed to implement custom controls | [15:03]
hverkuil | CMD_QUEUE is global (i.e. you only call it for a single stream, but it applies to all streams). It is really meant to be used while streaming where you prepare buffers for each stream + and requests, then CMD_QUEUE queues up all those prepared streams and is responsible for setting the new request values at the right time. | [15:04] |
pinchartl | and the format of the metadata will be allowed to be vendor-specific
in which case parsing will be performed by userspace | [15:04] |
hverkuil | What you want is an APPLY_GLOBAL. Just set the requests for all streams, nothing else. | [15:04] |
pinchartl | the kernel driver will receive a metadata buffer and pass it to userspace
using a buffer queue
that shouldn't be much of an issue
but in the other direction, I wonder how to handle vendor-specific controls, as they won't be known by the kernel driver | [15:04]
hverkuil | I.e. an req_apply callback next to the req_queue callback. Easy enough to add. | [15:05] |
pinchartl | right, I'd need some form of global APPLY | [15:06] |
hverkuil | with vendor-specific you mean camera-specific? | [15:07] |
pinchartl | yes
there will be a single kernel driver but multiple cameras, developed by multiple vendors
a bit like in the UVC case | [15:08]
hverkuil | how do you plan to pass these vendor-specific data to the camera? What API is used there?
just a binary blob that is passed to the camera? | [15:09] |
pinchartl | I don't know yet
that would be the easiest, but I dislike it as much as you do :-) | [15:10] |
*** | ocrete has quit IRC (Ping timeout: 250 seconds) | [15:10] |
hverkuil | Worst case you would need to know the size of each vendor 'control' by querying the camera, and then you can create u8 array controls of that size. Those can then be used in requests.
Ideally you get more metadata, but knowing the size is the minimum you need. Basically QUERYCTRL :-) | [15:11]
pinchartl | I'll see what can be done
I'll have to experiment with that and check performance too, as we're talking about 300 controls per request
ok, I need to go I'm afraid, or I'll miss breakfast
and the day will be long, so I don't want to miss it :-) | [15:12]
hverkuil | Make sure that you give a good hint when calling v4l2_ctrl_handler_init(). If the number of controls is too far off from the real number, then the hash will be inefficient.
enjoy! bon appetit | [15:14] |
pinchartl | thank you
and thank you for your help | [15:16] |
....... (idle for 30mn) | ||
bparrot | hverkuil, pong, if you press for time, yeah go ahead and update my patch for the value64. I have not looked at the details yet, so i don't know why it is needed. In my testing it worked just fine as is. But my testing was not comprehensive | [15:46] |
hverkuil, oh i see
i'll resend in about 5 mins if that still helps | [15:57] | |
hverkuil | bparrot: yes, that would help! | [15:58] |
bparrot | ok np | [15:58] |
hverkuil, done | [16:03] | |
............................................................ (idle for 4h56mn) | ||
*** | awalls has left | [20:59] |