hverkuil: <u>pinchartl</u>: if I remember correctly, the main missing piece is when a request contains data for subdevices: codecs do not have subdevices, so that has never been a problem, but ISPs do. The Request API itself is fine for this AFAIK, but hooking everything up internally is a whole different story. <br> If you want to do that, then I would suggest trying to implement it in vimc first: for one, that makes it easier for other people (i.e. me) to test since no hardware is required, and it is probably much easier to prototype with and try different things. 
<br> I think the Request API can handle multiple video devices, as long as they all belong to the same driver. It's never been tested, but I think it is possible with the current infrastructure (note the 'I think' bit here: it's a long time since I last dug into the details). pinchartl: <u>hverkuil</u>: we won't need subdevs, as M2M ISPs don't use them for anything <br> we would need format support though <br> one of the things I'm wondering about is how to handle VIDIOC_STREAMON <br> we issue that on video nodes <br> but not in the context of a request <br> if we have multiple applications using the same device in a time-multiplexed way, how do we handle that ? hverkuil: Format support should be doable. A fair amount of work, but (I think) not necessarily difficult. It will require a uAPI change, so perhaps we should consider picking up the old series again that adds new format ioctls. <br> I don't think streamon is an issue: it just says that the video is ready for streaming, but it is independent of requests AFAICS. <br> Re time multiplexing: I'm not sure what you mean. That sounds more like what an m2m device does, where each open file handle has its own context. That already works. pinchartl: yes, I need M2M operation with multiple clients, but I can't use the M2M framework as I need multiple video nodes <br> (5 of them) hverkuil: That should not be a problem: the m2m framework just provides helper functions; it is otherwise independent of the Request API. <br> Your driver will have to create the context (just as the m2m framework does for m2m drivers). <br> If multiple video devices have to work together, then there is currently no way to tell that two file handles are to be in the same context. <br> I.e. with m2m devices, when you open the video device a context is created. If another application opens the video device, a new context is created and the two have no knowledge of one another. 
<br> So if a driver creates a video0 and a video1, and you want to have both devices in the same context, then I would not know how to do that today. Currently you either have a global context and only one application can create the requests, or you have a context per file handle whenever it is opened. pinchartl: <u>hverkuil</u>: sorry, I was in a call <br> there's probably something I'm missing <br> with the M2M API, the context is tied to the M2M device file handle hverkuil: <u>pinchartl</u>: I'm in a call now for the next hour. pinchartl: no worries. I'll type my questions and comments, and we can continue the discussion later :-) <br> I'm thinking about tying the context to the MC device file handle <br> that's a central point that supports multiple opens <br> all the ioctls that support the request API will be fine. they take a request fd as a parameter, and a request is created on the MC device, so it can be linked to the context <br> but we have a few issues <br> there are ioctls that don't support the request API <br> among those, important ones are <br> - VIDIOC_G_FMT and VIDIOC_S_FMT <br> that should be fixable. conceptually it's not a big issue <br> - VIDIOC_REQBUFS, VIDIOC_CREATE_BUFS and VIDIOC_REMOVE_BUFS <br> that's more problematic, as those don't operate on a request <br> we'll need to create a vb2 queue per context for each video node <br> and figure out a way to link to the context for those ioctls <br> - VIDIOC_STREAMON and VIDIOC_STREAMOFF <br> same thing here, those ioctls don't operate on a request <br> I'm wondering if we should extend them with a context_fd parameter, which would be the fd of the MC device related to the context ndufresne: <u>pinchartl</u>: I feel a slight mix and match between context and request <br> I'd like to learn more about why you need to queue a buffer allocation as part of a request? 
pinchartl: I don't :-) <br> but I need to allocate buffers for a context <br> for V4L2 M2M devices, you have a single video device, so the video device fh is the context <br> ioctls being called on a file handle, the kernel knows which context to operate on <br> here we have multiple video devices that all need to be bound together in a context <br> we can create the context on the MC device, binding it to the MC file handle <br> but then, when calling e.g. VIDIOC_REQBUFS, how does the kernel know on which context to operate ? <br> the video device file handle that VIDIOC_REQBUFS is called on is not tied to the context created on the MC device <br> for ioctls that take a request fd parameter, it's easy <br> requests are created by an ioctl call on the MC device <br> so requests are bound to a context <br> having the request fd in the ioctl means we know which context it belongs to <br> for ioctls that don't operate on requests (buffer allocation, streamon/off, ...) we don't get a request <br> so we need to get the context in a different way <br> hence my idea of extending those ioctls with a context_fd, which would be the MC fd <br> is this clearer ? 
ndufresne: Yeah, but I don't like that approach <br> a request is meant to be used to queue a combination of data and settings toward an operation that will complete in the future pinchartl: yes, and we want to use that ndufresne: Within a single context, but it's not identifying a context pinchartl: to queue the ISP input buffer, the ISP parameters buffer, the ISP statistics buffer and the ISP output buffers all together <br> as far as I understand, the request API is meant for that purpose <br> it also happens that using requests will provide us with access to contexts for some ioctls, which is just an added bonus ndufresne: What I had in mind is that you could have an API to create a context in MC; initially your video node FD is context-less, and you bind it with this new API pinchartl: my question to Hans is related to how to handle multiple contexts for the ioctls that don't deal with requests <br> an ioctl on the video device to bind it to an MC context would make sense I think ndufresne: Then fds know their context and you don't have to extend the request APIs pinchartl: I would still like to use the request API, to queue an operation on the ISP with all the buffers, but it then becomes an entirely disconnected problem <br> but with such a new context bind ioctl, I wouldn't need to extend individual ioctls <br> I would still need to extend VIDIOC_G_FMT and VIDIOC_S_FMT with request support, but that's a separate question ndufresne: Could also be some open_for call, not sure of the proper form <br> Btw, I've got a similar use case with Hantro decoders: they can output up to 4 different resolutions; what you suggest would fit too pinchartl: sounds like we have something to experiment with :-) ndufresne: We also have multi-input blitters which have been reduced to CSC and scaling <br> I'd really like these to stay m2m/multicontext, for multiplexing reasons pinchartl: the Renesas VSP1 is a multi-input blitter. we have a V4L2 driver for it, with multiple video nodes. it's already possible today, but with a single userspace client <br> having support for contexts on the MC device sounds like the right solution ndufresne: Yeah, for me a single userspace client is a no-go, it's not usable on desktop <br> I believe scheduling should be done kernel side <br> That raises another challenge: if you don't have a fixed number of inputs/outputs you probably need an MC state per context pinchartl: yes, scheduling would be done on the kernel side, I agree <br> yes, there would be one MC state per context ndufresne: I mean, sounds like a rough plan, and I feel not everything needs to happen in one go hverkuil: Adding a context bind ioctl to the MC would make sense, or at least it is certainly something to explore. pinchartl: if both of you think it's worth exploring, I think we have a plan :-) <br> <u>sailus</u>: ^^ <br> what do you think ? hverkuil: For the buffer allocation/removal I think it is possible to add a request_fd. VIDIOC_REMOVE_BUFS has enough reserved fields for that, same for CREATE_BUFS. REQBUFS is really full, I would probably not add request support to that ioctl. <br> I don't see why streamoff/streamon need changes. All these do is indicate that the application wants to start or stop streaming, but that's independent of requests. pinchartl: CREATE_BUFS doesn't operate on a request, but on a context <br> I don't think adding a request_fd to them would make sense <br> same for STREAMON/STREAMOFF. we need to operate on contexts when we have multiple clients <br> <u>jmondi</u>: ^^ what do you think ? sailus: <u>pinchartl</u>: Requests have been used over multiple video nodes in the past, that should just work. 
<br> I'm not saying it's nice to implement in drivers but that's another matter. <br> Videobuf2 really isn't meant for that. pinchartl: it won't, because we have a need for per-context ioctls that are not request-specific <br> VIDIOC_STREAMON or VIDIOC_CREATE_BUFS need to operate on a context <br> but they don't operate on a request <br> so requests are not enough sailus: I'm not sure whether creating contexts on MC (file handles) can be meaningfully combined with the use of non-MC nodes. pinchartl: do you mean that it should then operate solely on the MC device, ditching the video and subdev nodes completely ? <br> that's a major change, API-wise sailus: <u>pinchartl</u>: Yes. It'd be a massive change, of course. pinchartl: and that's the opposite of what we've discussed on the list sailus: Not really. I meant using the request API, as-is, without adding contexts. pinchartl: what does that bring us ? sailus: Requests? pinchartl: the discussion was about time-multiplexing the ISP between multiple applications. what do requests alone bring us for that ? sailus: I think there may have been a communication issue here. <br> I meant multiple streams, but that does not equate to multiple contexts (or application). jmondi: <u>pinchartl</u>: I'll read the backlog pinchartl: we already have a solution for multiple streams <br> what you pushed back against on the list was the driver implementation of multi-context operation hverkuil: <u>pinchartl</u>: ah, you are right. The request API doesn't have a context at the moment, or more precisely, it has either a global context, or a per-fh context (m2m). What you are looking for is something in between. 
pinchartl: M2M devices have a device-wide context, because they have a single video device, so the video device context is device-wide <br> but for devices that have multiple video devices, that's not achievable today <br> so the video device contexts need to be bound to the global context at the MC level one way or another <br> I think a bind ioctl is a good solution hverkuil: No, whenever you open a file handle for an m2m device the context is created. It is independent of any other open file handles of the same video device. E.g. see vim2m_open in vim2m.c. It creates a vim2m_ctx context whenever a new fh is opened. pinchartl: yes <br> what I mean is <br> because your device has a single video device node <br> the context created when you open the video device node is device-wide <br> it covers the whole hardware device <br> whereas with ISPs, because we have multiple video device nodes, we don't have that anymore <br> any context we would create when opening a video device node would be disconnected from the context we would create when the same application opens another video device node <br> that's why we need a different mechanism to create a device-wide context in that case <br> creating a context within the kernel when opening the MC device node is easy <br> and we just need one new ioctl to bind a v4l2_fh to an MC context <br> with that, we can bind all the pieces together jmondi: so that from any IOCTL on the video device we can get back to the MC context after they have been bound together <br> this seems nice pinchartl: correct jmondi: I'm now not 100% sure I understand why we need the Request API then <br> but that's me not knowing enough of the Request API, probably hverkuil: You still need to make the requests containing the data and buffers so they can be queued. Each request, when applied by the driver, will be applied to the context (i.e. updating it). 
Right now that context is implicitly provided by the driver, but this proposal makes it an object in its own right. <br> At least, that's how I understand it :-) jmondi: yeah, when it comes to the context I think I get it :) pinchartl: I don't think we need the request API more than we do today with a single context <br> but it would still be good to use it <br> as it binds together buffers <br> and could solve the issue of how to handle optional streams <br> but it's orthogonal to the contexts jmondi: "I don't think we need the request API more than we do today with a single context" ack, this was my understanding as well ;) <br> (or with multiple contexts realized by duplicating the media graph, for what it matters) pinchartl: so we need to introduce a media_device_state <br> that will store link state <br> and an ioctl on video device nodes (and subdevs too later) to bind to an MC context hverkuil: state == context? The terminology is a bit confusing. pinchartl: we can bikeshed the names. we have v4l2_subdev_state, I was thinking about media_device_state to match that <br> but I don't care too much about the names <br> context is possibly better <br> we'll see what makes sense when prototyping sailus: I'll be back tomorrow if not later this evening. I'll read the discussion later on. sailus: <u>pinchartl</u>: Binding video node file handles to a Media device context (file handle) should be a workable approach. <br> Co-incidentally, I had continued working on the Media device lifetime management set. 
Co-incidentally, it brings file handles to MC. But for contexts we'll need it, too. pinchartl: <u>sailus</u>: do you have a patch to introduce MC file handles ? sailus: <u>pinchartl</u>: This should work (only rebased, not tested): https://git.linuxtv.org/sailus/media_tree.git/log/?h=media-ref pinchartl: thank you sailus: <u>pinchartl</u>: There seem to be minor compilation issues. I'll sort those out tomorrow. <br> Well, ok, it was just two drivers that began using media_devnode_is_registered(). I've fixed them, it should all compile now. More tomorrow... pinchartl: :-) 
ndufresne: I see, jmondi pinchartl, so for the short term, requests are orthogonal to the most pressing issue, but I'd be happy to help design the use of requests with M2M ISPs <br> one of the races you cannot otherwise solve is for operations that require multiple inputs (on multiple input queues), but where a secondary input may be optional; the request queue operation disambiguates this <br> in practice, we could make existing stateless codecs work without them, it's something I discovered much later, but I would not want to: requests are much nicer to program pinchartl: yes, when some of the inputs or outputs are optional, requests would help ndufresne: and when we introduce multiple video nodes, you can use the request to wait for all the work to be completed, instead of having to poll every queue <br> an optional "CAPTURE" might be interesting of course, since the next request might be ready by the time you DQBUF, so you still need to check against the sequence and reserve the buffer for later ...