This is the first draft of the report from the Linux Media Summit 2016, held in San Diego in April 2016.
Linux Media Summit 2016 – San Diego
Mauro Carvalho Chehab <email@example.com> (Samsung)
1. CEC Framework Status update
The first version of the framework is close to completion. It will likely be submitted for kernel 4.7, although it could be merged only in 4.8.
It is an independent framework that allows consumers in V4L2, DRM, ALSA, …
Driver support is work in progress for the Pulse-Eight USB dongle, omap4 and adv7604/7842/7511.
2. Quick demo of the new qdisp utility that is in development
The qdisp utility is a simpler alternative to qv4l2 that handles video capture and only displays the captured video.
Currently, qdisp requires OpenGL; OpenGL ES support is planned.
Color space and format conversion code is based on GPU shaders. It will be split into a library to be shared with qv4l2. A CPU-based alternative would be feasible but isn’t planned at the moment.
The qdisp code is currently available here: http://git.linuxtv.org/hverkuil/v4l-utils.git/log/?h=cec.
3. Request API Status update
At the moment, the Request API allows chaining multiple existing ioctls into a request that is either applied atomically or not at all.
One new ioctl takes all state and applies it atomically (like DRM atomic modesetting).
How can atomic operations be performed across subsystems (V4L2, DRM, ALSA, MC)?
Hans will contact Pawel to see what/when needs to be upstreamed for e.g. the rockchip driver.
4. Stream multiplexing
CSI-2 has up to 4 virtual channels (2 bits) and 6 bits for the data type.
Virtual channels do not have to be in sync with one another, so different virtual channels can carry different frame rates.
Within a virtual channel, each line is tagged with a data type as well, so a single virtual channel can carry both metadata and video data.
Introduce the concept of virtual channels which are routed on top of the physical links. A virtual channel has a route that goes through multiple physical entities, with routing information at each entity describing how the data is forwarded.
Laurent will dig up old router entity code he posted in the past and re-post it or provide a link to that code.
5. DT Bindings for flash & lens controllers
There are drivers that create their MC topology using the device tree information, which works great for entities that transport data, but how to detect entities that don’t transport data such as flash devices, focusers, etc.?
How can those be deduced using the device tree?
Proposal: the sensor DT node adds a phandle to the focus controller, i.e. add generic V4L binding properties to reference such devices.
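A possible shape for such a binding is sketched below; the property names are illustrative, as the actual generic properties were still to be defined at the time of the discussion:

```dts
&i2c1 {
	camera-sensor@10 {
		compatible = "vendor,sensor";	/* placeholder compatible */
		reg = <0x10>;

		/* phandle references to devices that do not transport
		 * data; property names are illustrative only */
		lens-focus = <&vcm>;
		flash-leds = <&flash0>;
	};
};
```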
6. How to improve the linux-media patch and review process?
Currently, sub-maintainership is not working as expected. Also, we’re currently lacking DVB and RC sub-maintainers.
We also lack reviewers.
Shuah offered to help with Media Controller patch reviews.
Daniel Vetter's idea for handling public APIs that might still need tweaking: make them depend on a debug config option, so that enabling the API taints the kernel.
Post an RFC at the linux-media ML asking for volunteers for DVB and Remote Controller sub-maintainership. In the case of Remote Controllers, we could also post the RFC to linux-input.
Mauro: contact Kamil to ask about the status of the codec sub-maintainership. If he has no time, Hans can take over.
Push on Intel (Sakari, Guennadi) (perhaps talk to Dirk Hohndel), Samsung (Marek and team), Google (Pawel) to give them time for upstreaming/reviewing.
Next media workshop
Should we organize the next media workshop at ELC-E or at LPC/KS? Let's try a quick survey of who plans to go where. As this workshop was held in the US, Europe would be a good choice to attract more European developers.
7. Fix broken media_device alloc/remove – Media Device Allocator API
The media module (media.ko) needs to be the owner of the media devnode cdev, rather than the first driver that registers it. With such a change, all drivers that use the media controller should use the Media Device Allocator API.
8. Media Controller connection Entities
MC currently lacks a way to expose how external sources and outputs (RF, S-Video, composite, etc) are connected.
Mauro explained the userspace needs that are not covered by our APIs today and that could benefit from the MC API. One of the goals for the MC, back in the 2009 discussions, was to show which device nodes are related and should be used together to capture analog TV, digital TV and ALSA streams, and to prevent the related drivers from streaming in unexpected ways. For example, when an analog TV connector is in use, the DVB API can't be enabled, and vice versa. The device nodes issue is unrelated to connectors, but supporting connectors is needed in order to prevent two incompatible paths from being used at the same time. In other words, analog and digital paths are usually exclusive, and without MC and MC connectors there is no way for userspace to know what the constraints are.
Physical vs. logical representation
While media cards/boards have physical connectors, the chips managed by the kernel driver see a “logical” connection, in the sense that each input/output is mapped via a pin set with a corresponding register setup.
Due to that, the representation used by V4L2 (VIDIOC_ENUM_INPUT, VIDIOC_G_INPUT, VIDIOC_S_INPUT) is based on the logical connection.
Another desired feature of representing connections in MC is to present information about connectors to the user, making it easier to know where to plug the cables. Such a representation is based on physical connectors.
A connection-based representation in MC would require properties to map them to physical connectors. A connector-based representation in MC would require properties to map them to logical connections. Provided that both mappings are possible, the MC representation could either use logical connections or physical connectors.
This problem is similar to virtual channels over a logical link (like CSI-2, see “4. Stream multiplexing”), in the sense that logical connectors can be thought of as a specific routing of one or more signals from the physical connector through some fabric (e.g. switch, crossbar) to the demodulator.
For RF and Composite, the physical and logical representation are the same, as those connectors have just one analog signal.
S-Video has two signals on it (Y+C). When an S-Video signal is sent to the connector, the physical and logical representations match. However, some devices allow using the S-Video connector to carry a composite signal via a Composite-to-S-Video cable, which requires a different setup at the chipset. In such cases, the physical and logical representations differ.
There is also one case not covered yet: how to handle cases where one logical connection is mapped via several different connectors? This is common in ALSA, where a stereo input/output can be either a single jack or two RCA connectors.
Proposal for complex connections
Model physical connectors and support a routing ioctl for the entities they are connected to. For existing drivers that use S_INPUT, we can either not show logical connectors at all, or show logical connectors only in the absence of knowledge about the actual physical connectors.
How that routing ioctl should look is still unknown, but it might be done the same way as the routing discussed in the context of CSI-2.
Some subdevs that have complex routing are saa7115, msp34xx and adv7842.
It was decided that, for now, we will map via MC only the cases where the physical and logical representations are the same, e.g. RF, Composite, and S-Video signals over an S-Video connector, postponing the other cases until we have a routing ioctl.
9. Pad identification
Currently, a pad is identified by its index and by whether it is an input or output pad. If the index changes between kernel versions, the userspace ABI breaks.
While we don’t have the properties API, we could use an enum to give a type to the pad. Don’t expose this to userspace yet, as exposing it to userspace requires more discussions, plus the properties API.
Adding pad names exposed to userspace can also be useful. However, there is a risk of userspace relying on specific string names to identify a pad. So, to avoid repeating the mistake we made with subdev and entity names, the API should define in detail what the name should contain and how it should be constructed. What userspace can expect from the name, including which information can be extracted from it (e.g. can I sscanf(“%u”) to get the pad number?), also needs to be defined in the API.
Create a pad type enum.
10. Other topics
When adding a new API that is still experimental, the DRM/KMS subsystem covers the API with a configuration option and taints the kernel when the option is selected. This lasts for a few kernel versions until the API stabilizes and is then removed. This could be experimented in V4L2, effectively giving us back CONFIG_EXPERIMENTAL.
Should we avoid reserved fields in new ioctls, or use API versioning (as in DRM/KMS)? Versioning is a bit more work in userspace, but it avoids issues with applications that don't zero reserved fields and break when the API is extended.
Other action items