kbingham: pinchartl: hi there. are you guys aware of any work towards supporting uvc controls in the linux uvc gadget?
mriesch: kernel or userspace application ?
pinchartl: well let's begin with the kernel :-)
please don't. I don't want to see controls in the kernel. they should be fully implemented in userspace
adding support for formats in the kernel driver was already a bad mistake
ah, it was a good idea to discuss this with you beforehand then
can you explain to me what happens at the moment when e.g. the zoom control is set on the gadget? is there any infrastructure right now that supports this?
there should be, yes
when the host sets a control, a UVC SET_CUR request is sent to the gadget
that request is handled by the kernel, and an event is generated
the userspace application already has support to handle events
it gets all the data from the SET_CUR request in the event
and it's up to the application to perform the requested action
event in the sense of... v4l2 event?
yes
pinchartl: ah ok. transparent kernel code. user space determines the control and acts upon it. got it
formats used to be handled the same way, but patches got submitted to also handle them on the kernel side, and I failed to block them :-( now it's a mess
pinchartl: but handling them in kernel space has the benefit that the individual applications do not need to implement the same magic over and over again, doesn't it?
they still have to be handled on the application side
when the kernel receives a request from the host to set a format, the application needs to configure the video source accordingly
the kernel can't do that
the only reason to add format support in the kernel is to make some generic userspace code dealing with V4L2 devices happier
but you need a custom application anyway
and this isn't a generic V4L2 device
it only reuses part of the V4L2 API because it was nicer than creating a fully custom API
pinchartl: i see.
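The SET_CUR path described above can be sketched in C. The constant values below match the UVC class specification (they also appear in the kernel's linux/usb/video.h), but the struct and function names are made up for illustration and are not part of any real driver or library API:

```c
#include <stdint.h>

/* Values from the UVC class specification (also in linux/usb/video.h). */
#define UVC_SET_CUR                  0x01
#define UVC_GET_CUR                  0x81
#define UVC_CT_ZOOM_ABSOLUTE_CONTROL 0x0b

/* Layout of the 8-byte USB SETUP packet that the UVC gadget driver hands
 * to userspace inside the v4l2 event payload (struct usb_ctrlrequest in
 * kernel terms). */
struct setup_packet {
	uint8_t  bRequestType;
	uint8_t  bRequest;
	uint16_t wValue;
	uint16_t wIndex;
	uint16_t wLength;
};

struct control_target {
	uint8_t selector;  /* which control, e.g. CT_ZOOM_ABSOLUTE */
	uint8_t entity_id; /* camera terminal / processing unit ID */
	uint8_t intf;      /* VideoControl interface number */
};

/* For class-specific interface requests, UVC packs the control selector
 * into the high byte of wValue and the entity ID into the high byte of
 * wIndex; the low byte of wIndex carries the interface number. */
struct control_target decode_control_request(const struct setup_packet *req)
{
	struct control_target t = {
		.selector  = req->wValue >> 8,
		.entity_id = req->wIndex >> 8,
		.intf      = req->wIndex & 0xff,
	};
	return t;
}
```

After dequeuing the event, the application would decode the request like this and dispatch on (entity, selector) to decide which part of the device to drive.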
well, thanks for the explanations :-)
I think that the argument was about running the gstreamer v4l2 sink element in parallel to the userspace uvc gadget application, in two different processes
pinchartl: going back to the uvc controls: could it be envisaged to use the gstreamer uvc/v4l2 sink for uvc video data and a different process (e.g. based on uvc-gadget) for uvc controls?
control should be fully separate from video streaming, yes
mriesch, are you already using https://gitlab.freedesktop.org/camera/uvc-gadget/ ?
kbingham: no, gstreamer uvcsink. we do not support controls (yet)
is the gstreamer to get encoding ? or libcamera support? (or because there was already uvcsink support :D)
kbingham: there are different paths to uvc, and at least one variant includes a hw scaler and hw encoder. gstreamer is used to tie everything together in each case
That makes sense ;-)
gstreamer certainly makes sense
kbingham: pinchartl: what do you think of exposing the uvc controls via gstreamer?
mriesch: how so and why ?
maybe e.g. libcamerasrc could be connected to uvcsink via gstreamer and the zoom control could be forwarded to libcamerasrc via the gst bus
I'm not thrilled
you need device-specific logic in-between
I'd rather have a good application framework to make it easy to develop a UVC gadget application with minimal effort
what's the device-specific logic in that instance?
it depends on the device :-)
think of the pan/tilt controls in UVC for instance. those won't be handled by libcamera at all
pinchartl: why not pan/tilt?
why would they be implemented by libcamera ? those are slow motors external to the camera pipeline
because libcamera instructs the subdevice that represents the slow motors to do so
at least that is happening on our devices with mechanical zoom.
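Before forwarding a control such as zoom to the video source (whether as a message on the gst bus or through any other path), the application has to decode the SET_CUR data stage, a 2-byte little-endian value for CT_ZOOM_ABSOLUTE, and map it against the min/max range it advertised for GET_MIN/GET_MAX. A minimal sketch, with made-up helper names and an assumed linear mapping:

```c
#include <stdint.h>

/* The data stage of a SET_CUR for CT_ZOOM_ABSOLUTE carries a 2-byte
 * little-endian value inside the range the gadget reported for
 * GET_MIN/GET_MAX. USB wire order is little-endian regardless of host
 * byte order, so decode it explicitly. */
uint16_t uvc_le16(const uint8_t *data)
{
	return (uint16_t)(data[0] | (data[1] << 8));
}

/* Map the wire value linearly onto a zoom factor between 1.0 (wide end)
 * and max_zoom (tele end); the result is what would be handed to the
 * video source, e.g. posted towards libcamerasrc. The linear mapping is
 * an assumption of this sketch, not something the UVC spec mandates. */
double zoom_factor(uint16_t cur, uint16_t min, uint16_t max, double max_zoom)
{
	if (max <= min)
		return 1.0;
	return 1.0 + (max_zoom - 1.0) * (double)(cur - min) / (double)(max - min);
}
```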
pan/tilt seems like a logical extension of that
zoom can make sense in libcamera
pan/tilt doesn't in my opinion
depending on the device, there will be a need for device-specific logic
so focussing on building pieces that can be connected together is better than trying to shoe-horn everything into a single implementation
also, we do digital pan and tilt on the gpu, in addition to digital zoom. it is the same render step (and surely you wouldn't want to separate them)
but this may be better placed in #libcamera
that's different
we already have a ScalerCrop control
well anyway, i can agree that usually the situation is so complex that there is some block in between that handles the controls and instructs the device to do something, via libcamera or a different path...
quite a few of the controls will be forwarded to libcamera, directly or after some pre-processing
and a sample application built on top of reusable building blocks that would handle the most common controls in a naive pass-through way is a good idea
but it's important to have an underlying architecture that makes it possible to build other applications
I don't want a mid-layer here, I want reusable components that can be assembled
uvc-gadget provides a lib as well, right? so i could go and base on it with my supersecret never-to-be-open-sourced business logic code?
I don't want to see patches that will add controls to libcamera for the sole reason that the gstreamer uvcsink generates events on a gst bus. the glue doesn't belong to libcamera
uvc-gadget provides a library, yes. that was a design goal
lgpl right?
I think so
pinchartl: cool. gotta run -- thanks for the insights!
you're welcome
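Digital zoom plus digital pan/tilt being "the same render step" boils down to computing a single crop rectangle, similar in spirit to what a ScalerCrop-style control takes. A simplified sketch of that arithmetic (illustrative only, not libcamera code; names and the [-1, 1] pan/tilt convention are assumptions):

```c
struct rect {
	int x, y, w, h;
};

/* Compute a crop rectangle on a sensor_w x sensor_h frame for a digital
 * zoom factor >= 1.0 and pan/tilt offsets in [-1.0, 1.0], where 0.0
 * keeps the crop centred and -1.0/+1.0 push it to the edges. One
 * rectangle expresses zoom, pan and tilt together, which is why
 * separating them into independent render steps makes little sense. */
struct rect crop_for_zoom(int sensor_w, int sensor_h,
			  double zoom, double pan, double tilt)
{
	struct rect r;

	r.w = (int)(sensor_w / zoom);
	r.h = (int)(sensor_h / zoom);
	/* Split the space freed up by cropping according to the offsets. */
	r.x = (int)((sensor_w - r.w) * (1.0 + pan) / 2.0);
	r.y = (int)((sensor_h - r.h) * (1.0 + tilt) / 2.0);
	return r;
}
```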