2019-05-24   Edinburgh Media Summit 2018 meeting report - October 25

Here’s the report on the Media Summit held on 25th October 2018 in Edinburgh. The report also covers the stateless codec discussion held two days earlier.

Most presenters used slides which are available here:


  • Brad Love
  • Ezequiel Garcia
  • Gustavo Padovan
  • Hans Verkuil
  • Helen Koike
  • Hidenori Yamaji
  • Ivan Kalinin
  • Jacopo Mondi
  • Kieran Bingham
  • Laurent Pinchart
  • Mauro Chehab
  • Maxime Ripard
  • Michael Grzeschik
  • Michael Ira Krufky
  • Niklas Söderlund
  • Patrick Lai
  • Paul Elder
  • Peter Griffin
  • Ralph Clark
  • Ricardo Ribalda
  • Sakari Ailus
  • Sean Young
  • Seung-Woo Kim
  • Stefan Klug
  • Vinod Koul

CEC status - Hans Verkuil

Hans presented an update on CEC status.

  • cec-gpio error injection support

  • tda998x (including BeagleBone support after gpiolib changes)

  • ChromeOS EC CEC

  • In progress: omap5/dra7xx/am57xx TI (waiting for DSS redesign to land)

  • In progress: SECO cec driver (for UDOO x86 boards, expected for 4.21)

  • DisplayPort CEC-Tunneling-over-AUX for i915, nouveau, amdgpu

  • MegaChips 2900 chipset based adapters seem to support this protocol very well

  • Continuing work on CEC utilities, esp. the compliance test: it is in continuous use at Cisco.

The current CEC status is available here.

Slides are available here

rc-core status report - Sean Young

In the last year all staging lirc drivers have been either removed or ported to rc-core. Decoding of the more obscure IR protocols and protocol variants can now be done with BPF, with support in both the kernel and ir-keytable (which is in v4l-utils). Generally we’re in a good situation wrt IR support.

There is some more ancient hardware (serial or usb-serial) that is not yet supported, but it is unclear whether anyone cares. kernel-doc coverage is a little sparse and does not cover BPF IR decoding, so that needs improving. There was a discussion on enabling builds with CONFIG_RC_CORE=n. Sean suggested that rc_allocate_driver() could return NULL in that case and the drivers would deal with this gracefully, i.e. their probe functions should continue without IR. Mauro said there should instead be a per-driver config option (as is done for saa7134, for example).

No conclusion was reached on this.

Persistent storage of controls - Ricardo Ribalda

Ricardo gave a presentation on a proposed solution for using the V4L2 control framework as an interface for updating control value defaults on sensor EEPROM.

Sensors commonly come with device specific tuning information that’s embedded in the device EEPROM. Whereas this is also very common for raw cameras on mobile devices, the discussion this time was concentrated on industrial cameras.

The EEPROM contents may be written by the sensor vendor but occasionally may need to be updated by customers. Setting the control default value was suggested as the exact mechanism to do this.

The proposal was to use controls as the interface to update sensor tuning information in the EEPROM.

There were arguments for and against the approach:

  • Drivers usually get these things right: relying on a user space program to do this is an additional dependency.
  • Re-use of an existing interface (a root privilege check may be added).
  • Partial solution only: EEPROM contents may need to be updated for other reasons as well, and a “spotty” implementation for updating certain EEPROM locations seems very use case specific.
  • Changes required to the control framework for this — defaults are not settable at the moment.
  • The need is very use case specific, and adding support for that in a generic framework does not seem to fit very well.

The general consensus appears to be not to change the control framework this way, but to continue to update the EEPROM using a specific user space program.

Tooling for sub-system tree maintenance - Laurent Pinchart

Laurent talked about the DRM tree maintenance model.

The DRM tree has switched to co-maintainer model. This has made it possible to share the burden of tree maintenance, removing bottlenecks they’ve had.

The larger number of people having (and using) commit rights has created the need for stricter tree maintenance rules, and subsequently for a tool to implement them. The tool, “DIM” (the DRM Inglorious Maintainer script), is a command line front-end that executes the workflow: drm inglorious maintainer script - dim

In particular what’s worth noting:

  • The conflicts are resolved by the committer, not by the tree maintainer.

  • DIM stores conflict resolutions (as resolved by developers) to a shared cache.

  • DIM makes it harder to make common mistakes by running sanity checks.

There are about 50 people who currently have commit rights to the DRM tree. There are no reports of commit rights having been forcibly removed as of yet. This strongly suggests that the model is workable.

The use of the tool puts additional responsibilities, as well as some burden, on the committers. Before patches may be pushed, they must first be compiled on the developer’s machine. That takes time away from productive work unless special arrangements are made, such as having a second local workspace.

The discussion that followed concentrated on the possibility of using a similar model for the media tree. While the suggestion was initially met with a mostly favourable reception, there were concerns as well.

V4L2 was once maintained broadly according to the suggested model, albeit without the proposed tools or a strictly followed process. There was an incident which involved merging around 9000 lines of unreviewed code in a lot of places. What followed was not pretty, and it eventually led to the loss of multiple developers.

Could this happen again? The DRM tree has not suffered such incidents, and it was generally understood that such an incident could be addressed by simply reverting the patch and removing commit rights if necessary. (Editor note: we have reverted the media tree master state to an earlier commit many times for various reasons. Version control was based on Mercurial back then, and reverting a single patch wasn’t that easy.)

Some opined that we do not have a bottleneck in reviewing patches and getting them merged whilst others thought this was not the case. It is certainly true that a very large number of patches (around 500 in the last kernel release) went in through the media tree. It still appears that there would be more patches and more drivers to get in if the throughput was higher.

It was also agreed to try to improve tooling to streamline the workflow. (Ed. note: also see testing related topics below.)

Current status of testing on the media tree - Sakari

The common practice in media subsystem development is that developers test their patches before submitting them. This is, however, an unwritten rule: sometimes patches end up not being tested after slight changes are made to them, or they have been tested on a different kernel version. The developer may also simply forget to test the patch.

Besides this, it is not uncommon that changing the kernel configuration or switching to a different architecture will cause a compilation warning or an error.

The 0-day bot will catch some of these errors before the patches are merged, but that testing does not fully cover all the possible cases. There are some common pain points in V4L2-related Kconfig options (plain V4L2, MC or MC + subdev uAPI); newly submitted drivers may in fact require one of these, but the developer may not have realised that and so this ends up not being taken into account in Kconfig.
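The Kconfig pain point can be illustrated with a hypothetical driver entry (the driver name VIDEO_FOO is made up; the selected symbols are the V4L2/MC/subdev options the text refers to). A new sensor driver that uses the MC and subdev userspace APIs must state those dependencies explicitly, and forgetting to do so is a classic source of randconfig build failures that 0-day may or may not hit:

```kconfig
config VIDEO_FOO
	tristate "Hypothetical Foo camera sensor support"
	depends on VIDEO_V4L2 && I2C
	select MEDIA_CONTROLLER
	select VIDEO_V4L2_SUBDEV_API
	help
	  Sketch only: a driver that registers subdevices and a media
	  graph must select MEDIA_CONTROLLER and VIDEO_V4L2_SUBDEV_API,
	  otherwise some configurations fail to build or to probe.
```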

Once the review is done, and after being applied to the sub-maintainer tree, a patch is applied to Mauro’s local tree and Mauro performs additional tests on it. These tests currently prevent a fair number of problems reaching a wider audience than the media developers.

On the other hand, whenever an issue is found, the patch has to be fixed by the sub-maintainer or the developer. This is hardly ideal, as the problem has usually existed for a month or two before being spotted, and by a program at that. These checks should instead be performed on the patch when it is submitted.

Automated testing - Ezequiel Garcia

An ideal Continuous Integration process consists of the following steps:

  1. patch submission
  2. review and approval
  3. merge

The core question is “what level of quality standards do we want to enforce”. The maintenance process should be modelled around this question, and not the other way around. Automated testing can be a part of enforcing the quality standards.

There are three steps:

  1. Define the quality standard
  2. Define how to quantify quality in respect to the standard
  3. Define how to enforce the standards

On the tooling side, a uAPI test tool exists. It’s called v4l2-compliance, and new drivers are required to pass the v4l2-compliance test. It has quite a few favourable properties:

  • Complete in terms of the uAPI coverage
  • Quick and easy to run
  • Nice output format for humans & scripts

There are some issues as well:

  • No codec support (stateful or stateless)
  • No SDR or touch support
  • Frequently updated (a distribution-shipped v4l2-compliance quickly becomes useless)
  • Only one contributor

Ezequiel noted that some people think v4l2-compliance changes too often, but Hans responded that this is a necessity: the API gets amended occasionally and the existing API gets new tests. Mauro proposed moving v4l2-compliance into the kernel source tree, but Hans preferred keeping it separate, as that makes it easier to develop.

To address the problem of only a single contributor, it was suggested that people implementing new APIs would need to provide the tests for v4l2-compliance as well. To achieve this, the v4l2-compliance codebase needs some cleanup to make it easier to contribute to: the codebase is large and there is no documentation.

v4l2-compliance also covers the MC, V4L2 and V4L2 sub-device uAPIs.

DVB will require its own test tooling; it is not covered by v4l2-compliance. In order to facilitate automated testing, a virtual DVB driver would be useful as well. The task was added to the list of projects needing volunteers:

Media Open Source Projects: Looking for Volunteers

There are some other test tools that could cover V4L2, but at the moment it seems somewhat far-fetched that any of them would be used to test V4L2 in the near future:

KernelCI is a test automation system that supports automated compile and boot testing. As a newly added feature, additional tests may be implemented. This is what Collabora has done: effectively, the current demo system runs v4l2-compliance on virtual drivers in virtual machines (LAVA slaves).

A sample of the current test report is here:

RFC: kernelCI media subsystem pilot

The established way to run KernelCI tests is off the head of the branches of the stable and development kernel trees, including linux-next. This is not useful as such to support automated testing of patches for the media tree: the patches need to be tested before they are merged, not after merging.

In the discussion that followed among a slightly smaller group of people, it was suggested that tests could be run from selected developer kernel trees, from any branch. If a developer needs long-term storage, they could have another tree which would not be subject to automated test builds. Alternatively, the branch name could be used as a basis for triggering an automated build, but this could end up being too restrictive.

Merging the next rc1 by the maintainer would be no special case: the branch would be tested in a similar way to the developer branches containing patches, and the tests would need to pass before pushing the content to the media tree master branch.

Ezequiel wished that people would reply to his e-mail to express their wishes on the testing needs (see sample report above).

Stateless codecs - Hans Verkuil

Support for stateless codecs will be merged for v4.20 with an Allwinner staging codec driver.

The earlier stateless codec discussion concluded that bitstream parsing is application specific, so there is no need for the generic implementation that was previously foreseen. The question that remains is: should there be a simple parser for compliance testing?

All main applications support libva, which was developed as the codec API for Intel GPUs. A libVA frontend was written to support the Cedrus stateless V4L2 decoder driver. It remains to be seen whether the same implementation could be used as such for the other stateless codec drivers, or whether changes, or in the worst case a parallel implementation, would be needed.

Slides are here

New versions of the old IOCTLs - Hans Verkuil

V4L2 is an old API whose focus has shifted over time in terms of functionality and hardware supported. While there have been lots of changes to both during the existence of V4L2, some of the API is unchanged since the old days. While the API is usable for the purpose, it is needlessly clunky: it is often not obvious how an IOCTL is related to the task at hand (such as using S_PARM to set the frame interval), or the API does not use year 2038-safe timestamps (struct v4l2_buffer). These APIs deserve to be updated.


In the case of VIDIOC_G_PARM and VIDIOC_S_PARM, the IOCTLs are only used to set and get the frame interval. What can be done here is to add a new IOCTL definition with the same IOCTL number and a binary-equivalent IOCTL argument struct that only contains the field for the frame interval itself. This is binary-compatible with the existing code and no compatibility code will be needed. The new IOCTLs will be called VIDIOC_G_FRAME_INTERVAL and VIDIOC_S_FRAME_INTERVAL.


Besides a discrete set of supported frame intervals, VIDIOC_ENUM_FRAME_INTERVALS supports stepwise frame intervals as well. Stepwise support could be removed, as the Qualcomm venus codec and uvc (100 ns units) are the only users. Additionally, the buffer type should be added to struct v4l2_frmivalenum.

There was also a discussion on enumerating frame intervals in units of ns vs. fractional seconds. The reasoning for using a fraction is that this way the frame interval of many standards can be conveyed precisely. Somebody recalled the “flick”, a unit whose rate is a common multiple of the frame rates of all TV standards. Drivers could simply use flicks as the denominator, making frame interval reporting uniform across drivers.

  • struct v4l2_buffer

struct v4l2_buffer is an age-old struct. There are a few issues in it:

  • The timestamp is not 2038-safe.
  • The multi-plane implementation is a mess.
  • Differing implementations for the single-plane and multi-plane APIs are confusing for both applications and drivers.

The proposal is to create a new v4l2_buffer struct. The differences to the old one would be:

  • __u64 timestamps. These are 2038-safe. The timestamp source is maintained, i.e. the clock remains CLOCK_MONOTONIC apart from certain drivers (e.g. UVC) that let the user choose the timestamp source.
  • Put the planes directly into struct v4l2_buffer. The plane struct would also be changed; the new plane struct would be called v4l2_ext_plane.
  • While at it, the plane description can be improved:
    • The offset of the start of the data from the beginning of the plane memory.
    • Add width and height to the buffer? This would make image size changes easier for codecs. (Ed. note: pixel format as well. But this approach could only partially cover what the Request API is for.)
  • Unify single- and multi-planar APIs.

The new struct could be called v4l2_ext_buffer.

As the new IOCTL argument struct will have different syntax as well as semantics, it deserves to be named differently. Compatibility code will be needed to convert the users of the old IOCTLs to the new struct used internally by the kernel and drivers, and then back to the user.

  • struct v4l2_create_buffers

Of the format argument, only the fmt.pix.sizeimage field is effectively used by drivers supporting VIDIOC_CREATE_BUFS. This could be simplified by just providing the desired buffer size instead of the entire struct v4l2_format. The user would be instructed to use TRY_FMT to obtain that buffer size.

The need to delete buffers has eventually surfaced. This was expected, though it wasn’t known when it would happen. As the buffer index range would become non-contiguous, it should only be possible to create buffers one by one, as otherwise the indices of the additional buffers could no longer be communicated to the user unambiguously.

So there would be new IOCTLs:

  • VIDIOC_CREATE_BUF - Create a single buffer of given size (plus other non-format related aspects)
  • VIDIOC_DELETE_BUF - Delete a single buffer
  • VIDIOC_DELETE_ALL_BUFS - Delete all buffers

The naming still requires some work; the opposite of “create” is “destroy”, not “delete”.

  • struct v4l2_pix_format vs. struct v4l2_pix_format_mplane

Working with two different structs depending on whether the format is multi-planar is painful. While we’re making changes in this area, the two could be unified as well. (Editor note: this is orthogonal to the buffer changes, so it could be done separately as well. We’ll see.)

Slides here

Fault tolerant V4L2 - Kieran Bingham

Kieran presented a system where the media hardware complex consisted of eight more or less independent camera sensors that naturally end up being within a single media device.

The current implementation, as well as the API, necessitates that all devices in a media device probe successfully before the entire media device is exposed to the user. Otherwise the user would see a partial view of the device, without knowing that it is partial.

To address the problem, additional information needs to be provided to user space. In particular:

  • Events on the media device to tell the graph has changed.

  • Graph version number is incremented at graph change (already implemented).

  • The property API could be applicable — placeholders for entities that have not yet appeared?

    • Alternative: known entities that have failed to probe are created in the media graph and marked “disabled” or “failed”.
  • Query the state of media graph completeness.

That way, even when the devices in a media controller device appear one by one, the user space will be able to have all the necessary information on the registration state of the device.

Complex cameras - Mauro Chehab

Some new laptops integrate a raw Bayer camera + ISP instead of a USB webcam. This is expected to become more common, as the solution is generally cheaper and results in better quality images — as long as all the pieces of the puzzle are in place, including the proprietary 3A library.

Still, such devices need to be supported.

(Ed. note: there were two talks related to this topic given at ELC-E.)

Complex Cameras on Linux

Why Embedded Cameras are Difficult, and How to Make Them Easy

Development process - All

Topic-wise this is continuation of the “Tooling for sub-system tree maintenance”, “Current status of testing on the media tree” and “Automated testing” topics above.

The question here is whether there’s something that could be improved in the media development process and if so, how could that be done.

What came up was a suggestion to have a multi-committer tree, in a similar manner to the DRM developers. This was seen as more interesting for developers than simply being asked to review patches.

It certainly does raise the need for more precise rules on what may be committed to the multi-committer tree, and when.

It was also requested that experienced driver maintainers send pull requests for patches to their own drivers instead of going through a sub-maintainer (pre-agreed with the relevant (sub-)maintainer). This would take some work away from the sub-maintainers, but not from the maintainer.

No firm decisions were reached in this topic. Perhaps this could be tried out?

There was also a request to document the sub-maintainer names in the wiki so that it’d be easier for people to figure out who to ping if their patches do not get merged.

hosting - All

Mauro noted that the site is currently hosted in a virtual machine at a German university. The administrator of the virtual machine has not been involved with Video4Linux for some time but has kindly provided us the hosting over the years.

It has been recognised that there is a need to find a new hosting location for the virtual machine. There is also the question of the domain name. Discussion followed.

What could be agreed on rather immediately was that the domain name should be owned by “us”. “Us” is not a legal entity at the moment, and a practical arrangement to achieve that could be to find a new association to own the domain name.

Mauro will discuss with the LF to find out what they can offer. Concerns were expressed over other organisations providing us with hosting that we are not in charge of ourselves. Other options related to domain ownership and hosting will be researched as well.

No decisions were reached on the topic.

