prabhakarlad: ping
hverkuil: pong
prabhakarlad: you were right: the am437 code isn't quite correct. The new 'which' field needs to be set to ACTIVE.
Otherwise a NULL pointer isn't allowed.
I'll resend.
hverkuil: I felt so because I had 2 patches for am437x while testing your subdev cfg patches, and I didn't have them handy yesterday
lyakh: I've no idea
we need to ask LF about that
mchehab: ic, I submitted a request to them via their online form, no idea where that lands eventually
pinchartl, hverkuil, sailus: can we start our MC discussions
?
sure
mchehab: Sakari just went to lunch
I was thinking about doing the same
how about in 45 minutes ?
OK
pinchartl: please ping me when you and sailus will be ready
sure
snawrocki: ping
mchehab: pong
(waiting for Sakari)
sailus: please ping when you're ready
pinchartl: you're maintaining the vsp1 driver right? an entry in MAINTAINERS for it is missing
I'll fix that
ok. I'll delegate you a few vsp1 patches at patchwork
v4l: vsp1: Fix VI6_WPF_SZCLIP_SIZE_MASK macro
mchehab: what would you think about adding a "MEDIA DRIVERS FOR RENESAS" entry in MAINTAINERS ?
DRM has something similar
it groups all DRM drivers together
we don't do that for media drivers, but it could be a good idea to start
IMO, the name doesn't matter much, as for each new driver you'll need to touch MAINTAINERS
to include the new files there
Hello!
yes, but my point was that grouping entries per subsystem seems nice
hverkuil: still here?
y
pinchartl: "Renesas" is not a subsystem
media is :-)
VSP1 MEDIA DRIVERS ?
or MEDIA DRIVERS FOR VSP1
I was thinking about MEDIA DRIVERS FOR RENESAS - VSP1
works for me
and yes, it makes sense to rename our entries to start with "MEDIA DRIVERS"
yet, I suspect that Linus may not like such an entry-renaming patch series, as it is likely to cause all sorts of merge conflicts
we can start doing it for new entries
OK
hverkuil, pinchartl, sailus: let's start our discussions for MC
the point where we stopped in our discussions is how to represent the device control entities
as pinchartl said, the original idea is that a V4L2 entity would actually represent the DMA hardware used to do I/O to the CPU
if I can quickly summarize my point
sure
I think the base MC design is to describe the hardware, or at least a logical view of the hardware that is close to it
entities in a graph should thus have a hardware counterpart, or logical hardware counterpart
to give you an example of what I mean by logical hardware counterpart
UVC exposes the device topology through USB descriptors
and the uvcvideo driver in turn exposes that through MC
the entities exposed in UVC devices might not map to one IP core each, but they are defined by the logical functionality from a hardware device point of view
so that's what I think we should aim for in MC
how those entities are handled by the kernel is of course important
information regarding how to access and control the entities is needed
but I don't think that should be exposed as MC entities
it should instead be exposed as property of the MC entities
"property" here has a wide meaning
it can mean a field in struct media_entity_desc
a property exposed through a new property reporting ioctl
or even have a different meaning
well... let's for now stick with the current status, without the DVB patches...
but from a conceptual point of view, I think that entities should map to the hardware device, and control should be reported as entity properties
on a radio device, the V4L2 devnode does not mean DMA
still, the V4L2 core would map it into the pipeline
we don't have radio support in MC yet :-)
I think there is a very good case to be made for software entities
(radio devices may or may not have DMA... some just have a connector to the radio)
s/to the radio/to the audio output/
hverkuil: Do you have examples of such?
the boundary between hw/ip cores/sw is getting pretty vague, and it makes sense to support sw entities.
that could make sense
Examples are radio, but IMHO also frontend.
if you think about vivid for instance
all entities will be software entities
and that's not really a problem for me
The frontend is a sw entity that controls several hw entities (tuner, demod, etc)
I think entities should represent part of the device
There could be virtual devices, but that's mostly a workaround IMHO, albeit still probably a necessary one.
"device" meaning the hardware/firmware/software device
but not the devnode
I don't think a pipeline should be used to represent data processing in software, at least not in the general case.
I'd be happy to leave the software topic aside for the time being.
sailus: well, it is relevant in the DVB frontend case.
MC exposes the topology of a model of the device
in the DVB case, frontend is a software abstraction, right ?
with regards to showing who controls what: I think having control links between control pads would work.
control links and pads ?
sailus: on the DVB side, the DVB core implements demux in software, for hardware that doesn't have it
Right.
yes, but a well defined one (part of the dvb core), with an associated device node. Hence my feeling that it makes a good fit for a sw entity.
in the DVB case, frontend is a part of the hardware
I don't want to represent devnodes as entities, no
mchehab: I agree that could make sense since it has well defined functionality.
the frontend can be a single component or a group of components
it would link to hw entities, allowing the MC to decide which tuner/demod/whatever is controlled by the frontend.
we can decide that representing the frontend as an entity makes sense because it's a good abstraction model
but *not* because it has a devnode
the devnode is a property
But arbitrary programs, I don't think so.
*not* a reason to represent something as an entity
pinchartl: on the DVB side, all devnodes are associated with hardware components
that's good :-)
except for demux, which could be a software emulation of a hardware component, for cheap hardware
to put it another way
I don't want the reasoning behind the MC model to be
"there's a devnode here, let's make an entity"
I want it to be
"there's a component here, let's make an entity, and it is controlled by a devnode, let's report that"
it's a very fundamental difference in my opinion
pinchartl: if we did it this way, we would need to redesign the entire MC...
but what if multiple components are controlled by the same devnode?
and regarding kernel-based software demux, I'm not against reporting it as an entity
because it's a component in the DVB pipeline
for example, on a typical analog TV board, *all* components are associated with the same videonode
hverkuil: that's not a problem. you can report that the same devnode controls multiple entities
s/videonode/video devnode/
mchehab: that's a different case
it isn't
in current non-MC devices
the same video devnode controls the whole chain
but indirectly, through the bridge driver
VIDIOC_[G|S]_TUNER ioctls control the tuner entity
there's no direct control on subdev possible through the video devnode, it's indirect
for that reason I wouldn't report that the tuner is controlled by the video devnode
but it is controlled by the video devnode
no, it's controlled by the bridge driver
which is controlled by the video devnode
reporting the same devnode for several components will make it way harder for the software, IMHO
mchehab: I agree with that.
why so ?
Shouldn't the tuner have a separate sub-device node instead?
no, the bridge driver doesn't control it. It only provides a PCI (or USB) bridge to I2C
on almost all cases
The MC is currently focused on data streams, not control.
Existing applications could use it through libv4l.
I want to keep MC focused on data streams
hverkuil: yes, but pinchartl's proposal, if I understood well, is to use an entity property to tell what devnode controls each component
the pipeline should represent data streams
and MC should report how to control pipeline entities
mchehab: no, the bridge driver really controls the i2c devices. Often the translation between the userspace ioctl and the subdev ops is very thin, but it is there
if we're willing to do something like that, we'll need to report that the tuner is controlled by /dev/video0
worse than that, the tuner could also be controlled by /dev/adapter0/frontend0
I don't see why we would need to do that
the tuner is not controlled by /dev/video0
it's controlled by the bridge driver
that's a very important difference
the bridge driver doesn't have to do much
the translation is often tiny
but there's a v4l2 ioctl -> subdev ops translation happening
even if it's pass-through in some cases
pinchartl: I think you missed the point
as I said, the v4l2 devnode is not a DMA
it is not DMA for radio...
and it is not DMA for some video devices supported by bttv driver...
that use external cabling to do video overlay
so, if we'll be only shown the v4l2 devnode if there's a DMA associated to it...
no problem with that, if such devices exist, MC can expose their hardware topology
there will be some scenarios where there will be no v4l2 devnode shown in the pipeline
I'm fine, for those cases, with reporting the v4l2 devnode as a property of the bridge chip entity
because that's essentially what we do
the MC hardware topology exposed in such a case will be useless for userspace, as you don't want to represent which devnodes would be used to control such a pipeline
we control the bridge driver through the devnode
hmm... so your proposal is to add a new entity to represent the bridge driver?
this will be very confusing for DVB
no!
(and for ALSA as well)
not to represent the bridge driver
to represent the bridge
if you think about a bttv-based device without DMA
there's an analog tv-in connector
yes
such hardware have analog TV in and analog TV out
-> bttv -> (?) -> output connector for overlay
that's the hardware pipeline
(I'm not sure what the device is useful for though)
but we certainly have connector to connector devices
representing the pipeline is easy...
the same way we have mem to mem devices
but where to put /dev/video0 there?
(same applies to /dev/radio0 on radio devices without DMA)
there's two cases to consider
if you think about a bttv
the bttv is the main component
and the bttv driver is the master driver
it's the one that creates /dev/video0
handles the v4l2 ioctl to control the bttv chip
and delegates other v4l2 ioctl to subdevs through subdev ops
so in that case it's logical to associate /dev/video0 with the bttv chip entity
the rule is pretty simple, an entity directly controlled by a device node should report that device node as a property
(in most cases through struct media_entity_desc.dev.{major,minor} or similar, the exact naming hasn't been fully agreed on)
(but you get the point)
another case to consider
which we don't support at the moment
is connector-to-connector pipelines in an FPGA
pinchartl: in your example, there's a one-to-one mapping between a bridge driver and a devnode
in that case we have IP cores associated with subdevs
but if you take a more complex device, like cx231xx...
but no master anywhere in the pipeline
and thus no driver to create a v4l2 device node
the same bridge is used for V4L, input, alsa, DVB
but that's unrelated to MC for now, it's a v4l2 problem
so, one physical entity has several API entry points
the cx231xx even has several subdevs inside
like the analog TV demod
the internal subdevs could be represented as entities, that's not a problem
yes, but representing cx231xx as an entity with one devnode property won't work
how to split one chip into subdevs/entities is a case-by-case decision, it's a trade-off between abstraction and complexity
the same applies to all alsa devices...
cx231xx will have a DMA engine used to transfer video frames to memory, and a DMA engine used to transfer audio, won't it ?
in that case the DMA engines will be represented by entities
as typically, the mixer and the audio handling are in the same bridge device
with associated devnodes
Note that the entities with video/vbi nodes represent DMA engines (each device node maps to a DMA engine in the bridge hardware).
pinchartl: associating the bridge with DMA is wrong
I mean:
associating a devnode with DMA is wrong
not all devnodes have DMA
I would represent a bttv as an entity of its own, linked to entities representing DMA engines.
and, on DVB, the DMA is not visible to userspace...
hverkuil: I would do so as well
there's something after the DMA in the DVB case, for several types of hardware
A DMA engine is an entity with an optional device node.
hverkuil: although I'd have to check the bttv hw architecture first to confirm that 100% :-)
hverkuil: I agree with that
it makes sense to represent DMA engines as entities, as they are hardware entities/components
it's really two separate issues
so this all works fine for data transfer, just not for control.
pinchartl: the bttv devices with external overlay cabling are not very common nowadays
first how to report a device stream-based topology through MC
and then how to expose the control points
(mchehab: out of curiosity: which bttv model has external overlay?)
I don't think the MC stream-based topology model should be dictated by our control architecture
they used to be found in the old days, when DMA transfer costs were high and DRM cards didn't have the performance for video
hverkuil: I don't remember the model I used to have
this was, actually, the first TV board I got here
I can't remember seeing support for external overlays in bttv.
I think it was an old PCTV model
but this device is long gone... it used to belong to a friend. I had to return it to him a long time ago
I don't think we should care too much about that for bttv, but it can certainly be a model for other recent devices
I'm having the same problem with FPGAs
I think it used the overlay API to setup the screen size...
where it's pretty easy to create a connector-to-connector pipeline from video IP cores
hmm... no...
it uses a control to tell which overlay color is to be replaced
the DRM code fills a rectangle with such color...
and right now I'm not sure how to support that from a driver point of view, MC is not even my concern yet :-)
and the overlay chip replaces that specific color with the TV image
so it's a two inputs, one output chip ?
pinchartl: a TV set may have a pipeline without any DMA on it
mchehab: correct. same case as with the FPGA
pinchartl: yes
the FPGA pipeline is represented in DT
one input from a VGA in, one input internally, from the TV board, and one output with the mixed image
with all IP cores having one DT node
and the DT nodes being linked using phandles
the problem is that there's no central driver responsible for creating a media device and video device in that case
there are lots of subdevs being instantiated
but nobody to group the subdevs together
that's not even an MC problem
it's a driver issue
there's no master component in the pipeline
well, very likely one "virtual" bridge driver would be needed to take care of the whole setup
anyone familiar with regmap subsystem :D
no matter if we'll need an extra driver for that or not...
a virtual bridge driver would be fine with me
but
there's no DT node to bind that driver to :-)
a pipeline like that will still need V4L2 devices to control some things there, even without any DMA
i have a write only register, and not sure if there is something similar with regmap_update_bits for this write only reg
mchehab: the only reason why a v4l2 devnode is needed for that pipeline is to start/stop streaming
but yes, at the moment it's needed
and of course a media devnode is needed
but it's a separate issue
_daniel_: crope did some work with regmap API. Anyway, we're in the middle of a meeting here... please either wait for us to finish or ping him in priv or at some other channel
pinchartl: on a digital TV pipeline for a TV hardware, the v4l2 devnode will also be needed to control the mpeg TS play/stop/pause/continue
yes
that's pretty much stream start/stop
to put us all on the same page, such a pipeline would be:
in a slightly different way
tuner -> demod -> demux -> mpeg TS decoder -> DRM
tuner/demod controlled via frontend
demux via demux devnode
mpeg TS decoder via v4l2 devnode
IMHO, it would be easier to represent the control points as control entities
so, such pipeline could be represented as:
tuner -> demod -> demux -> mpeg TS decoder -> DRM
  |        |        |            |             |
  +---+----+        |            |             |
      |             |            |             |
 FRONTEND dev   DEMUX dev    V4L2 dev      DRM dev
that would mean to add a third type of pad... control pad
no
I don't like that
your representation is fine
and the devnodes need to be reported
mchehab: it's what I had in mind as well
but not through entities, pads and links
entities, pads and links model the data stream
I don't want to abuse them to expose control points
neither now, given that you'd have a hard time doing so while maintaining backward compatibility
nor if we had to design this from scratch
pinchartl: not quite: we decided to - for now - only model the data stream. There is nothing preventing us from using it to model other types of connections.
I agree with hverkuil
we have software parsing MC pipelines in userspace
going through links
whether entity/pad/link is a good model for this is a separate issue.
hverkuil: How would you tell that apart from data pipeline?
without checking the link type, as there's no link type
The links were always meant to model the flow of data, not control.
pinchartl: let's first discuss the "perfect" model...
but regardless of backward compat, I think using pads/links/entities to report control points is a bad model
then we should see how to migrate to it doing the required backward compat bits
control links are links between control pads. That's a pad with MEDIA_PAD_FL_CONTROL set.
that's what I had in mind.
hverkuil: I dislike it a lot
hverkuil: What would you use control pads for?
to tell that entity A controls entity B, C and D (or whatever)
I agree with hverkuil
I.e. a frontend entity controls a tuner and demod entity.
total nack from my side :-)
as in mchehab's example above
the control entities have a 1:n mapping
hverkuil: And that control would be on hardware, right?
sailus: no, it's software control. userspace -> devnode -> entity
sailus: it generally is, usually i2c
so, it requires control entities and control links/pads to represent
stop please
the control is hardware
there's no way I'll accept control entities, pads and links at this point
it represents the address/control bus
The MC graph is intended to model the hardware. Not software.
currently, the MC graph represents only the data buses
The fact that an IOCTL on a video node would indeed control e.g. a tuner is what this interface historically has done.
It's an exception to the model. I don't think we should embrace that.
sailus: you're not seeing the broader case with DRM, ALSA, DVB, ...
I'm open to alternatives.
the subdev direct control of the hardware is actually an exception, not the rule
mchehab: Instead, in entity properties, I'd report the node used for controlling an entity. This could be a sub-device node, for instance.
sailus: how to represent the tuner's control devnodes?
on a complex scenario, the tuner has several control pads:
- v4l2 video devnode;
- v4l2 radio devnode;
- v4l2 SDR radio devnode;
- DVB devnode
- subdev devnode
only one can be active at a given time
There might be a need for more than one node, depending on how we decide to implement the interface.
V4L2 has generic sub-devices whereas other sub-systems at hand don't (AFAIU).
mchehab: the v4l2 video, radio and SDR devnodes would *not* be reported by the tuner entity. they control the tuner indirectly
I agree with Laurent we should keep links and pads data-only, and have a different solution for controlling the sub-devices.
mchehab: add a vbi devnode, which can be active at the same time as the video node.
sailus, pinchartl: how do you propose to use it, if not using a graph?
s/use/represent/
so far I don't see many arguments supporting the need to report more than one device node per entity
Hmm, mchehab+hverkuil vs pinchartl+sailus :-)
:)
let's take a clearer example to explain the concept of indirect control
tuner -> demod -> demux -> mpeg TS decoder -> DRM
pinchartl: the devnode used to access the tuner entity depends on its usage
that's actually a good example
on the DRM side
if you decide to turn the display off
the whole pipeline should be stopped (let's assume there's no DMA engine anywhere, so no recording use case)
but you wouldn't consider reporting the DRM devnode as a control point for the tuner
same thing for device nodes closer to the tuner
the tuner's v4l2 subdev node should be reported by the tuner entity
the v4l2 video, radio, sdr, vbi, ... devnodes control the tuner indirectly
they should *not* be reported as control point by the tuner entity
pinchartl: how to indicate what devnodes are associated with that particular tuner?
you mean devnodes other than the subdev devnode, right ?
a real pipeline typically has 2 or 3 tuners for real embedded hardware
the subdev devnode is reported directly by the tuner entity, so that one is easy
there's no need for a tuner subdev devnode in the above example
then the tuner entity should not report any devnode
let's take a simpler example
tuner -> bttv -> dma
or rather
the tuner needs to be controlled together with the demod, as some parameters, like bandwidth, should be adjusted at the same time at both tuner and demod
tuner -> bt878 core -> bt878 dma
pinchartl: the tuner should be reported as a devnode, if the device also supports analog TV
(leaving audio dma aside for now)
no !
the tuner is not a devnode
s/devnode/entity/
sorry
ah, that's better :-)
so
tuner -> bt878 core -> bt878 dma
three entities in the pipeline
it could be a devnode, though (a subdev devnode)
no
the tuner isn't a devnode
it could be controlled by a subdev devnode
but the tuner *is* not a devnode
so, you're suggesting a separate entity for the subdev devnode?
the tuner is a chip that transforms a modulated signal in a baseband signal
in the analog tv case the chip also usually integrates a pixel decoder (adc)
so the tuner chip produces digital video
that's what the tuner is
it's not a devnode
actually, the ADC is a separate entity
analog demod
the same way a DMA engine is not a devnode
it's a piece of hardware that receives a data stream on one side and sends it to the memory controller on the other side
it is not a devnode
we need to stop talking about entities being devnodes
they *are* not
they can be supported by devnodes, controlled through devnodes, have a driver that creates devnodes, be associated with devnodes, whatever
but they *ARE* not devnodes
it's not their nature
sure
it's not what they are
now that this is clear
I think we are on the same page here...
let's be precise now. the next person who tells me that a hardware chip is a devnode owes me a bottle of (good) whisky ;-)
nobody, I think, has doubts that the entities, right now, are not the devnodes, but hardware components...
which have both data and control buses
so how do we find out which devnode controls which entities?
the data buses are currently mapped via pads/links
larsc: that's the whole point of the discussion, thank you :-)
back to the example now please
tuner -> bt878 core -> bt878 dma
that's the hardware
but the control buses are badly mapped in the current state of the MC
tuner is an analog tv tuner chip containing both a tuner and demod in the same I2C-controlled chip
pinchartl: the demod is a separate entity
bt878 core and bt878 (video) dma are part of the same bt878 chip
with a different I2C address
ok
(maybe except for one or two exceptions)
tuner -> demod -> bt878 core -> bt878 dma
is that better ?
yes
I thought that in the analog tv case the tuner and demod were integrated in the same chip
in most cases
no. what's more common is to have tuner + analog audio demod integrated into one chip...
ok
my bad
so
tuner -> demod -> bt878 core -> bt878 dma
but the analog demod is either a separate component or integrated with the bridge
let's assume a non-MC pipeline
without subdev nodes
the bttv driver creates /dev/video0
and everything is controlled through /dev/video0
so far, so good ?
yep
in this case
we have four entities
tuner, demod, bt878 core, bt878 dma
the bt878 dma entity should report /dev/video0 as its controlling devnode
(currently, MC would map it as 3 entities)
I don't think there's any disagreement there
currently MC wouldn't map it, because the bttv driver doesn't support MC :-)
pinchartl: this is how it is right now, but mapping /dev/video0 to the DMA is actually a mistake
(it does, for cx231xx, though ;) )
I don't think it's a mistake, and here's why
/dev/video0 is created by the bttv driver
so the bt878 core
that driver is bound to the bt878 chip
the bttv driver controls the bt878 chip directly
(that's what it has been written for)
so /dev/video0 controls the bt878 chip directly
yes, but the one that does that is the bt878 core, and not its DMA engine
worse than that, on USB devices, the DMA engine is outside of the USB bridge driver
wait
the DMA is inside the EHCI driver
basically, the problem is that the MC only shows a subset of the video0 functionality: the data transfer.
what do you mean by "the one that does that is the bt878 core" ?
the one that does what ?
hverkuil: please wait until I finish my explanation
I mean that it is the bt878 core that actually controls /dev/video0, and not the DMA engine of bt878 chip
no
that applies to both devices with DMA and without DMA (with the hardware overlay chips inside)
you can't have a piece of hardware controlling a Linux devnode
the linux devnode is an interface to a piece of hardware
if you ask the engineers who have designed the bt878, I'm pretty sure they had no idea what a devnode was, and they haven't included circuitry to interface with udev :-)
the bt878 engineers created a set of registers to control the hardware
the devnode is the linux way to talk with those registers
it's the devnode that controls the device, not the other way around
/dev/video0 controls the bttv driver, which controls the bt878 chip
yes
the devnodes are thus associated with a hardware component:
the bridge driver registers, in this case
both the core (adc, scaler, ...) and the dma engine
/dev/video0 controls both the bt878 core and the bt878 dma engine
through the bttv driver
in the case of a PCI device, yes
in the case of an USB device, no
let's talk about PCI first
in this PCI case I believe it makes sense to report /dev/video0 as the control devnode for bt878 dma, and probably for bt878 core
however
the tuner is controlled indirectly
so I wouldn't report /dev/video0 as a control devnode for the tuner
the tuner driver exposes two interfaces
an in-kernel subdev interface
and an optional userspace subdev API through a subdev devnode
so from a tuner entity point of view, the only devnode that should be reported is the subdev devnode
and if there's no subdev devnode, then no devnode should be reported
yes, the control bus graph for the tuner is:
userspace can of course take the MC stream-based graph, see that the tuner is connected to a bt878 chip, and get the v4l2 devnode from there for indirect control purpose
[tuner subdev devnode] --> [tuner] <-- [bttv core] <-- [video0 devnode]
(for a tuner entity that would be exposed both via subdev API and via V4L2 api)
pinchartl: yes, if we expose the bttv core via a DMA engine, userspace could check that video0 controls the pipeline
but in a generic case where there's no DMA engine in the pipeline, userspace has no means to discover the devnode that controls such a pipeline
who creates the /dev/video? for that kind of pipeline ?
the bridge driver
as I said:
(10:39:23) mchehab: I mean that it is the bt878 core that actually controls /dev/video0, and not the DMA engine of bt878 chip
I'm fine (I think) with the bt878 core entity reporting /dev/video0 as its control devnode
especially when there's no DMA engine
but most probably also when there's a DMA engine
OK
the problem appears when there are multiple devnodes that can control the tuner
still on bt878 example...
bt878 driver actually exports 3 devnodes: video?, vbi?, radio?
all three can control the tuner, but there are restrictions
for example, on a typical pipeline, we have:
for video: [video0] <--- [bt878] ---> [vbi0]
for radio: [bt878] --> [radio0]
either radio or video can be active at a given time
right
so, the control pipeline should have the concept of active/inactive link
can I explain how I think this should be done ?
sure
in the previous case that was considering video only
tuner -> bt878 core -> bt878 dma
(the demod seems to be in the bt878, not outside of it)
(but it's a detail)
/dev/video0 is reported by bt878 dma and bt878 core
and controls the tuner indirectly
yes, bt878 has demod inside its chip
in the video+vbi+radio case
the tuner is also controlled indirectly by those devnodes
so I believe they should not be reported by the tuner entity
as in the pure video case
I'm not too familiar with the radio API, what is the radio devnode for in a bt878 device ?
/dev/radio?
yes
it is a V4L2 devnode
just like video or vbi
yes but what is it used for ?
created by the V4L2 core
FM tuning
the device can either provide FM or video+vbi at a given time
so it's for audio only ?
no. audio doesn't flow via radio devnode
I mean audio control ?
radio0 only controls the radio
audio comes via ALSA API
the stream ioctls don't work on a /dev/radio? devnode
(except for 2 exceptions: pvrusb2 and ivtv, which provide an audio MPEG-TS, but this actually violates the API)
it's complex enough if we don't violate the api...
but anyway
as /dev/radio? is created by the bttv driver
I don't think the tuner entity should report it
pinchartl: yeah, we're aware of that, but fixing this is hard
it should be found the same way /dev/video? is found
how?
for video we have tuner -> bt878 core -> bt878 dma
one entity (bttv core) is associated, in this case, with 3 devnodes, 2 with DMAs, one without DMA
userspace knows that the tuner is connected to a bridge
and can thus find the corresponding /dev/video? node
yes, that's a problem
but note that
for video we have: tuner -> bt878 core -> bt878 video dma *AND* tuner -> bt878 core -> bt878 vbi dma
there's no /dev/radio? or /dev/vbi? support in MC at the moment
the bt878 core entity might need to report more than one devnode, indeed
but not the tuner entity
actually, cx231xx should expose vbi?... not sure if radio is shown there... need to do some tests
if there was only video and vbi we could get away with it
as they both have a separate dma engine
so we have one entity for video dma and one for vbi dma
allowing to report one devnode each
I think that radio is also exposed
what we could do
but that's a hack I think
pinchartl: we need a solution to expose those non-DMA devnodes
on DVB, most devnodes are non-DMA devnodes
would be to export /dev/radio on the bt878 core entity, /dev/video on the bt878 video dma entity and /dev/vbi on the bt878 vbi dma entity
I don't like that though
btw, the whole discussion started due to that
so I think a better way would be to expose
- /dev/radio, /dev/video and /dev/vbi on bt878 core
- /dev/video on bt878 video dma
- /dev/vbi on bt878 vbi dma
basically, we need to split devnode representation on MC from DMA
and
- /dev/v4l-subdev? on tuner, if the tuner has a subdev
but *not* /dev/radio, /dev/video or /dev/vbi on tuner
pinchartl: I'm ok with that
good, that's a big first step :-)
so, now, the problem boils down to how to expose more than one devnode for a single entity, right ?
the way I (and hverkuil) are proposing is to represent the hardware control logic as entities
in that case, a bttv core that supports video, vbi and radio would actually be represented as 3 entities
I see what you mean
each associated with a specific devnode
and I don't think it's the best way
we have more or less agreed about the MC representation of the hardware, and what devnodes need to be reported for each entity
yes
can we consider that as a good enough step for today and discuss how to report entities tomorrow ? :-)
we've been talking about this for 2h now
:)
and I have other urgent tasks to work on I'm afraid
sure we can continue tomorrow
np
What time?
thank you
same time as today?
How about one hour later than today?
I think that what we've achieved today is quite significant, even if it doesn't solve everything
fine with me
I have one hour meeting at that slot.
note that I have a meeting from 16:30 to 17:00 tomorrow
pinchartl: That would mean we have a 1.5 hour time limit. :-)
so we could start at 15:00 and pause for half an hour from 16:00 to 17:00
Should we have someone write the most important points down so we can refer to this later on? :-)
Half an hour pause in one hour?
sailus: have you just volunteered ? :-)
Fine for me.
sorry, 16:30 to 17:00
pinchartl: Good.
16:30 in which time zone?
GMT?
UTC+2
Finnish time
16:30--17 is the pause.
So 15:00 -- 16:30, and if needed, continue after 17.
mchehab: Would that work for you?
that's 13:00 - 14:30 UTC
if we're not finished by 16:30, then I'll go home and I can continue from there. But then I might be a bit later than 17:00, depending on traffic.
Right, not everyone always uses Finnish time. I keep forgetting that. :-P
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20150305&p1=101&p2=45
works for me
hverkuil: that's 15:30-16:00 for you, right ?
http://www.timeanddate.com/worldclock/meetingdetails.html?year=2015&month=3&day=5&hour=13&min=0&sec=0&p1=101&p2=45
y
sailus: that world clock meeting planner helps ;)
I always use it when I need to schedule any meeting, as there are always people outside Brazil ;)
pinchartl: ping
jmleo: pong
ever used GPIO based interrupts with adv7604 ?
hverkuil: I've rebased the Xilinx patches on top of your for-v4.1e branch. should I use for-v4.1f now ?
jmleo: no
jmleo: if your GPIO controller works correctly that works out of the box
larsc: there is no request_irq() in the driver, what drives the adv irq then ?
right
we should add that
pinchartl: it's better since I plan to post the pull request for that on Friday.
pinchartl: on the other hand, I'll rebase that branch later...
:-)
code-wise it won't matter.
I don't expect conflicts
I'd stick to for-v4.1e
I'll send v6 to the list, and will send a pull request once Mauro merges your patches
jmleo: right now the driver relies on the bridge driver to do the interrupt handling, but we discussed this before that the driver should support handling this on its own
larsc: as the adv7180 does...
jmleo: so I did in fact confuse gpio irq with gpio hpd. I thought irq support in adv7604 was working, but clearly I'm wrong.
hverkuil: ok
hopefully we can get this merged for v4.1
finally :-)
hverkuil: nevermind :) I need it, so I can try to implement it...
we are writing a device driver for a new platform and are curious about some of the v4l ioctls
it isn't entirely clear how some of the newer ioctls should interact e.g. how should s_selection interact with let's say s_fmt?
I believe s_selection, if you use cropping/scaling, affects the buffer size, which you ultimately set through s_fmt
some more legacy apps (e.g. vlc) adjust the cropping rectangle using s_crop AFTER calling s_fmt, which would cause you to use wrong buffer sizes in e.g. queue_setup if using vb2
Thunderbird: this was covered in depth at the media summit at last years ELC conference.
Let me see if I can find the notes...
http://linuxtv.org/news.php?entry=2014-05-17.mchehab
Hmm, no mention of the actual use cases discussed. Hmmm....
I'm pretty sure calling s_crop after s_fmt is a VLC bug. It's likely that whatever card the user had supported one or the other (not both), and didn't realize they could interact with each other.
The VLC V4L2 implementation is kind of a mess.
(for example, if your driver exposes a mute control, the whole VLC process will throw an ASSERT)
Thunderbird: this is a pretty good topic that discusses the new selection API and order of operations relative to s_fmt/s_crop: http://comments.gmane.org/gmane.linux.drivers.video-input-infrastructure/66011
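(Illustration: the interaction Thunderbird asks about can be shown with a toy buffer-size calculation. This is not real V4L2 code; it only models why changing the capture rectangle after S_FMT can invalidate buffer sizes already negotiated in queue_setup.)

```python
# Toy illustration (not real V4L2 code) of why setting the crop
# rectangle after S_FMT can invalidate already-negotiated buffer sizes:
# for a driver where the crop rectangle determines the captured frame
# size, sizeimage depends on the rectangle in effect at S_FMT time.

def sizeimage(width, height, bytes_per_pixel=2):
    """Buffer size for a packed format, e.g. 2 bytes/pixel for YUYV."""
    return width * height * bytes_per_pixel

# Application calls S_FMT first: the driver computes sizeimage for 720x576.
negotiated = sizeimage(720, 576)

# Then it shrinks the capture rectangle (old s_crop behaviour): the
# hardware now produces frames of a different size than the buffers
# were allocated for.
actual = sizeimage(640, 480)

assert actual != negotiated  # queue_setup sized buffers for the wrong frame
```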
back
what is also a bit confusing is how to handle HDMI capture cards
the source dictates the resolution
Well you're in luck - there *aren't* any HDMI capture cards currently supported under Linux. :-)
especially mode changes are fun ..
the closest I saw was the hdpvr device, which seems to have some hdmi support
You should email Hans on the linux-media mailing list. These are all good questions, and as somebody who has done HDMI drivers under Linux I have similar questions about how he expects those cases to work.
(in my case, the drivers didn't implement the new selection APIs because I wanted them to actually work with existing applications).
The HDPVR does *not* have HDMI support - it has HD component capture only.
devinheitmueller: I think hverkuil did some work on that, but for some hardware that is used only internally
I'm using the new APIs on purpose, because our use case needs some hacky things, which can be done in a nicer way using these APIs
mchehab: it's entirely possible that he got it working on his internal card, but it's not clear that he's ever publicly discussed the expected behavior.
our hardware is also internal, but we may upstream the driver (needs some convincing and need some IP of external companies cleared)
devinheitmueller: yes
Steven and I were talking about this a couple of weeks ago since we're doing an HDMI capture card, and there were lots of scenarios that the API doesn't really appear to handle currently.
(again, it's possible he has some unpublished expectation for how these edge cases should behave)
Oh, and of course there's no indication that *any* applications out there actually use the APIs, so it's not like you can start with the apps and extrapolate how the driver should behave.
ignoring cropping/scaling initially, I would have s_fmt just return whatever resolution the device was set to
it was unclear what g_fmt should do, should it return what dimensions are set into your internal driver structures or what the device is actually at?
devinheitmueller: well, as you're working on that, if you're meaning to upstream your work, use the ML to clear up the doubts and submit patches to improve the hdmi support
Thunderbird: that's essentially what I did, which was good enough for gStreamer.
mchehab: it's highly likely that none of this will go upstream. You've made it so difficult that it's not worth my effort anymore.
and if there is no signal, I had g_fmt return ENOLINK/ENOLCK
but it would be nice to get some of the interactions clarified, so I will write a detailed email with what I have done and the issues I'm running into
Thunderbird - I used ENUMINPUT to indicate no signal, but yeah that's probably good enough.
(in my case, I got it working with gStreamer, which is all I really cared about)
enum_framesizes is another fun one, should it just list your active resolution? hdmi capture you technically support arbitrary resolutions (limit is max pixel clock)
query_dv_timings is probably the best one to use now
for timings
devinheitmueller: I don't understand why you think this is so difficult
Thunderbird: most HDMI receivers support a fixed number of resolutions, typically what is mapped in their EDID info.
everyone else is submitting patches and getting them merged
in our case we don't really have a limit
Thunderbird: Ah, ok
mchehab: You only need to review the archives on Steven's attempt to submit the Viewcast 820e driver upstream for prime examples of all the bulls**t you guys are demanding which prevents good code from getting upstream.
I wonder how I would ever get my driver upstream, I think our device would be a misc device since it has video capture, custom GPIO and it acts like a storage adapter as well
mchehab: I'm not really going to argue with you on this again. You've deluded yourself into thinking things are working well and that driver quality is increasing. You're wrong on both counts but you're too close to the problem to see it for yourself.
devinheitmueller: steven seemed to not have the time to properly use the API
it was proposed to submit his driver to staging
mchehab: Yeah, let's REQUIRE that Steven use APIs that are in use by exactly zero drivers.
he didn't answer
He put three days into cleanup work, and it was still rejected. He gave up.
thanks I will prepare a good email, bbl
Thunderbird: Good luck!
devinheitmueller: the api is being used by some drivers under platform
Oh you mean with SOCs that nobody has access to unless you work for Samsung/TI/Freescale/whoever? And with proprietary applications that nobody has access to? Sure, ok.
devinheitmueller: the API should be the same
You want to demand that an API be adopted for new drivers - spend the time to demonstrate it working with a retail tuner.
Viewcast 820e is also a product that very few people have...
it is for professional usage
You would like to think so, but there are *plenty* of reasons why that would be wrong - mainly because the use cases had never been hit before and thus nobody knew what the answers should be.
also, you're free to comment/review any API
I tend to wait for a reasonable time before merging API patches to make sure that everyone interested can comment
Until somebody puts an API into actual use, that's when you find the edge cases nobody thought of.
I'm all for discussing the needs and improving the API
but sending a very big driver without discussing the API first is a big no
if such driver needs API, API should be discussed first
Look at the wonder that is videobuf2 - despite the *requirement* that Steven use it for the 820e, it wasn't being used for even a single standalone video capture device until *****I***** ported em28xx. Takes balls to demand somebody else shake all the bugs out of your API while not being willing to port a single tuner driver to it first.
as the API affects not only a specific driver
Steven intentionally didn't adopt the new API/framework because there were absolutely zero applications publicly available using them.
VB2 didn't start with em28xx
need to go
bbiab
That was the first piece of consumer hardware that supported it. It was totally unvalidated for a general purpose capture card until then, which is pretty pathetic.
ttyl.
Is it possible to get i2c adapter id from the device node ?
prabhakarlad|hom: I think you have to use an alias for that
headless: any pointers doing this ?
I wanted the bridge driver to know the remote endpoint's i2c adapter id and address
prabhakarlad|hom: drivers/i2c/i2c-core:i2c_add_adapter()
of_alias_get_id() call
you need to have "aliases" node in your DT, where you should have something like "i2c0 = &i2c0;" (&i2c0 being a label ref to your I2C device node)
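(Illustration: the aliases setup headless describes might look like the following board DT fragment; the node names and the `i2c0` label are only examples.)

```dts
/ {
        aliases {
                /* &i2c0 is a label on the I2C controller node; with this
                 * alias in place, of_alias_get_id(np, "i2c") returns 0. */
                i2c0 = &i2c0;
        };
};
```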
cheers that helps!
good
is there some docs on how interlaced video is handled from a driver point of view (i.e. is driver responsible for interlacing or does it do some magic calls to the buffer to indicate even/odd or??)
Thunderbird: disregard the total misinformation you got from devinheitmueller.
HDMI is fully supported, and so is the interaction between the various ioctls (fmt, selection, etc).
Those APIs are supported in various platform drivers and the vivid driver.
I would look at the vivid (virtual video) driver first since that supports HDMI (among others) with any combination of cropping, scaling and composing.
Unfortunately, the details of how cropping, scaling and composing interact is not documented properly today, the vivid driver code is the only place you can find this info.
Documentation/video4linux/v4l2-pci-skeleton.c is also a nice starting point and it supports HDMI as well (or at least, it shows some skeleton code).
Any questions, just ask me. I've been working with HDMI and linux for years and pretty much all of the APIs related to that in V4L2 were designed by me.
Devin should stop whining and just *ask on the mailinglist*.
hverkuil: Notes from yesterday: http://www.retiisi.org.uk/v4l2/notes/v4l2-mc-dvb-2015-03-04.txt
mchehab: ^
Thunderbird: regarding apps that can handle HDMI: the v4l-utils git repo has them: both v4l2-ctl and qv4l2 handle HDMI capture just fine.
sailus: "In the indirect control case, the control device node(s) should be reported by the entities that are directly controlled by them."
Do you mean that e.g. a tuner entity will report the /dev/video0 device node through which it is indirectly controlled?
or am I misinterpreting that sentence.
sailus: thanks for the notes
hverkuil: I think you're misinterpreting that sentence. what I understood from that phrase is that the bridge driver should report the /dev/video0 device node, and not the tuner
I hope so :-) That would make more sense.
yep
hverkuil, mchehab: Yes, the bridge driver would report it, not the tuner.
hverkuil: correct, the tuner entity shouldn't report the /dev/video? node
pinchartl: did you get my email?
jberaud: yes, thank you. I haven't had time to look into it though
I've just read the e-mail, not the code
why did you have to get rid of smiapp-pll ?
hverkuil, pinchartl, sailus: let's continue our discussions?
mchehab: Hello!
Moikka!
Bom dia!
pinchartl: because at some point, it became more a problem than a solution
mchehab: ok
mchehab: ok
pinchartl: I knew exactly what I wanted to do but couldn't get it to do so
ok, so we've agreed so far about the need for a way to properly describe the devnodes...
we just didn't agree yet on how ;)
jberaud: let's discuss that a bit later. I don't really want to get rid of it for the mainline version of the ar0330 driver, so we'll have to find a way to fix that
mchehab: ok
as discussed, we need a way to map a devnode to one or more entities...
and a way to map two or more devnodes to a single entity
I think we have indeed pretty much agreed about which entity should report which device node, but not how to do so
sailus and hverkuil, does that match your impression of yesterday's meeting ?
pinchartl: I perfectly understand that, I just didn't bother to adapt it because I knew exactly how I wanted to calculate the plls and didn't manage to do so with smiapp
pinchartl: Yes, it does.
jberaud: sailus is the smiapp-pll expert :-)
hverkuil: how about you ?
pinchartl: what I didn't get is why you were using it, I thought smiapp-pll were precisely for smiapp cameras
pinchartl: I agree.
jberaud: because the ar0330, while not being a smia sensor, reuses the smia pll model as-is
ok
then
I think we could also easily agree on the following
let's avoid doing two meetings at the same time here ;)
ok we'll discuss this later
jberaud: thanks!
if we had at most a single device node to report per entity, we would just use media_entity_desc.dev (or .v4l, ...)
and we would be done with it
but we sometimes have more than one device node to report
pinchartl: yes, but this is not the case
general agreement on that ?
we have both 1:n and n:1 mappings between devnodes and data flow entities
1:1 is easy, we could just use media_entity_desc.dev
1 devnode : n entities is easy too
we could just report the same devnode through media_entity_desc.dev of all n entities
it's n:1 (or, more generally, n:n) that we have no solution for at the moment
sailus and hverkuil: still good so far ?
pinchartl: Ack.
pinchartl: that's why I think that "pure" devnodes should be a special type of entity (control entities)
I'm not entirely certain about the 1 devnode : n entities case.
What we're lacking is the ability to report multiple device nodes for an entity, indeed.
mchehab: please wait. before proposing a solution, let's agree on the problem
hverkuil: frontend entity is associated with both tuner and demod
hverkuil: I'm not saying that in the 1:n case we must use media_entity_desc.dev, but just that we could
mchehab: but what if the tuner has a subdev node as well? then you have two devnodes associated with the tuner. Which does what?
how to support 1:n hasn't been decided yet. media_entity_desc.dev is a straightforward solution, but maybe not the best one
hverkuil: that's a n:n case
hverkuil: a tuner may have lots of devnodes directly or indirectly associated with it
And more general: how is the software supposed to interpret identical devnodes used in different entities?
1:n is when all n entities have 0 or 1 devnode to report, and the same devnode is reported by all n entities
and we need a way to activate/deactivate the active association with the tuner
And is a device node from a DVB frontend considered a direct or an indirect device node for a tuner?
hverkuil: I would say indirect, but that could be debatable
pinchartl: so when you say 1:n is 'easy', I say: well, the devil is in the details here :-)
hverkuil: ok, let me rephrase that. 1:n could be easy from a reporting point of view, we have an api (media_entity_desc.dev) that already supports it, even if we might decide that it's not the best api for the job
is that better ?
OK, for now :-)
good :-)
so the problem pretty much boils down to
- coming up with a new API for the n:1 and n:n cases
- possibly using that API for the 1:n case instead (or in addition to) media_entity_desc.dev
yes
good, we have our problem statement :-)
tentative ack
sailus: ack on the problem statement ?
pinchartl: actually it could be a new API or to use the existing one
I think you need to add 'direct vs indirect device nodes': how to use/detect. But lets leave that for now.
mchehab: there's no support for this in the existing MC API, or, at least, no standardized way to do it
That's something between the problem and the solution, but I agree. :-)
pinchartl: this is something that you're refusing to accept, but if you create a pure devnode entity, the current API fits
ok. I could try the same trick as yesterday, tell that we've made a good step and leave the rest for tomorrow, but I don't think it will fly :-D
no, it won't ;)
I would be OK with that, I am still not convinced that that is a bad idea at all.
mchehab: creating pure devnode entities is an extension of the MC API, or at least of its usage (it doesn't require new ioctls, I agree on that), so it's not 100% supported by MC today
pinchartl: I'm not against a new API, but I suspect that it will be identical, or almost identical to entity/link/pad model
Obviously it would have to be a SW entity.
it doesn't map to any hardware, but it does map to a piece of dvb/v4l2 core functionality
hverkuil: I wouldn't call it a software entity using today's vocabulary
today MC entities are defined from a data streaming point of view
in that context a software entity would be for instance a kernel-side software demux
Not really, a flash device has no data.
yeah, a devnode is actually a piece of software that translates the Linux API dialog into a hardware dialog
(a hardware abstraction layer)
hverkuil: that's why the flash has no pad or link ;-)
mchehab: no, I don't like the name software entity for that
a devnode isn't an entity in today's sense of entity
pinchartl: it is a control entity
we have no control entity concept
adding such a concept is what you'd like to do, but we don't have it today
well, call whatever you want, but it is an entity that controls a piece of the hardware
so I don't like the name "software entity"
but, agreed, that's a small naming issue
we can leave it aside for now
as I also don't like the concept of control entity :-)
I always saw the MC as a way to describe the system (for want of a better word) in terms of blocks and links. Whether a block is mapped to software, IP core or hw is immaterial, and it was never meant to be constrained to that.
I share this view
I don't :-)
:)
hverkuil: Software based entities may make sense in some cases, but that will easily get out of hands.
sailus: on FPGA-based devices, everything is software
For well defined functionality and components that the user space expects to be there (demod for instance), yes.
whether a block is mapped to software, IP core or hw is immaterial, sure, but in my opinion that applies to entities in the current sense of the term. not to "control entities", whatever they would be defined as
mchehab: Well, sort of.
the control entity you want to create is something totally different than the entities we have today
so?
But would you model what a perl program does using pixel data with an MC entity, for instance, if that program was run on an external CPU? :-)
for instance
would you create a control entity for every subdev node ?
pinchartl: I wouldn't create a separate control entity for subdevs
what makes the subdev devnode so different from the video devnode that it doesn't deserve a control entity while the video devnode does ?
an entity that is mapped via subdev API should just use media_entity_desc.dev
that doesn't answer my question :-)
why for video devnodes and not subdev devnodes ?
because the subdev devnode has an exact 1:1 mapping
sailus: perhaps. We do that today as well if it is firmware running on an external CPU. From the PoV of the application it really doesn't matter if an entity is backed by hw, sw or fpga. It's just a block with inputs and outputs.
fundamentaly they're both control points
that doesn't apply to the non-subdev devnodes
a video0 devnode has indirect control over the entire pipeline
a frontend devnode has control to tuner+demod
not always
in many MC-based drivers devnodes have no control over the pipeline configuration
s/devnodes/video devnodes/
in more classical v4l2 devices they do, but that's not always the case
pinchartl: I think reporting the device nodes related to entities would make this easier for the user space.
oh, I totally agree with that
pinchartl: for me that's a good reason why those should be reported as something separate from the DMA
we should report which devnode(s) are associated with each entity
A device node can: run a DMA engine, provide direct control of the associated entity, and/or provide indirect control of other entities.
for example, userspace needs to know if, on a MC-based video devnode, the video node controls the pipeline or not
I believe each entity should report the devnode or devnodes that directly control it
it should have some sort of graph telling what MC entities are controlled directly or indirectly by a devnode
but I don't think it should be done by creating entities for devnodes
pinchartl: you need to report also the indirect controls
I should it should use a property api
s/I should/I think/
indirect control is a different problem
I'm not against a property API, but this is a different issue
reporting it explicitly is an issue
an entity doesn't know which devnode(s), if any, can be used to control it indirectly
an entity can expose an internal kernel api (v4l2 subdev ops for instance)
pinchartl: let's imagine the following pipeline:
and a direct userspace api
that's all the entity knows about
it doesn't know through which devnode(s) its internal kernel api can ultimately be called
[tuner] -> [video demod] -> [video dma]
                         -> [vbi dma]
let's get 2 different drivers...
on driver 1, the video0 devnode can control the entire pipeline
wait
on driver 2, video0 devnode controls only the video dma engine
in that example
you have /dev/video0 and /dev/vbi0, right ?
how will the software know that it needs to manually talk to the tuner and video demod, instead of using /dev/video0 and /dev/vbi0 to program the hardware?
subdevices will not know (and cannot know) which other entities indirectly control them. But the entity associated with the device node will know which other entities that device node indirectly controls.
pinchartl: yes. I didn't put the devnodes there, because we're using MC, right now, for a pure dataflow pipeline
hverkuil: and that indirect control could also depend on runtime parameters such as the selected input
mchehab: we should probably report, for video devnodes, whether they allow full pipeline control or whether the subdevs need to be configured separately
sure, but my point is that subdev entities cannot return information about, say, video nodes that indirectly control them. They simply won't know that.
pinchartl: it is worse than that... we need to report what subdevs in the pipeline are controlled via a video devnode
hverkuil: agreed. if it wasn't clear, it's a point I was trying to make :-)
as some may not be controlled
mchehab: do you have an example of that ?
hverkuil: true, subdevs don't know. but the entity that creates the video devnode knows what subdevs are controlled
right
pinchartl: imagine a complex analog TV + digital TV + ALSA + DRM
with several tuners, several demods, several DMAs, etc
the devnodes there will only control a subset of the subdevs
could we talk based on a concrete example ?
I don't have any drawing available for a concrete case, but let's assume hardware with:
5 tuners (2 for DVB-T, one for DVB-C, one for DVB-S, one for analog TV), 2 DTV demods, 3 demuxes, 1 ATV demod, 2 ALSA outputs, 2 DRM outputs (for PIP)
in such a case, for example, the frontend devnodes will be associated with the 2 DTV demods, but there are 5 possible tuners (4, if we exclude the analog one) that could be controlled
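(Illustration: the board just described can be modeled as a routing table of candidate tuner-to-demod links, with at most one enabled link per demod at a time. This is a hypothetical sketch; all entity names are made up.)

```python
# Hypothetical model of the board described above: which tuners can
# feed which demods, with at most one enabled link per demod at a time.
# All names are illustrative, not from any real driver.

CANDIDATE_LINKS = {
    "dtv demod0": ["dvb-t tuner0", "dvb-t tuner1", "dvb-c tuner", "dvb-s tuner"],
    "dtv demod1": ["dvb-t tuner0", "dvb-t tuner1", "dvb-c tuner", "dvb-s tuner"],
    "atv demod":  ["analog tuner"],
}

def select_tuner(active, demod, tuner):
    """Enable the demod<-tuner link, replacing any previously active one."""
    if tuner not in CANDIDATE_LINKS[demod]:
        raise ValueError(f"{tuner} cannot feed {demod}")
    active = dict(active)      # keep the input routing table unmodified
    active[demod] = tuner      # one tuner at a time per demod
    return active

active = {}
active = select_tuner(active, "dtv demod0", "dvb-s tuner")
active = select_tuner(active, "dtv demod0", "dvb-t tuner0")  # re-route
```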
that's the kind of complex hardware that MC has been designed to support, right ?
I mean, without MC, you can't really support that cleanly
yes
if only we could agree on *how* to do it with MC :-)
(actually, it would be possible to add something at the DVB API level, but those are hacks, IMHO)
in that case why don't we create a subdev devnode for every entity and not use any indirect control, as we've done for the embedded v4l2 drivers (omap3isp & friends) ?
indirect control doesn't scale for complex pipelines
we realized that with v4l2 devices, and cooked up mc and v4l2 subdev devnodes as a solution
because tuning into a station requires setting both the tuner and the demod at the same time, with the same parameters...
shouldn't we just apply that for this kind of device too ?
as the tuner needs to know what bandwidth, intermediate frequency, etc will be used by the demod (and/or vice-versa)
mchehab: Would a way to configure these things atomically help?
the pipeline configuration must be coherent
we have the same requirement for pure v4l2 devices
still, subdevs are configured independently
pinchartl: the IF is a tuner property...
and we validate the configuration at streamon time
that the demod reads
to know on what frequency the demod should tune
In V4L2 / V4L2 sub-device we'd need frame level control, implementation-wise it's very close to atomic configuration.
streamon time might not be applicable for dvb, but we could do something similar still
Well, it's atomic configuration + knowing which frame it gets applied.
changing that is not possible without huge effort
it would require a major rework to allow those things to be somewhat independent
and it won't work with some devices that can dynamically change the IF depending on the physical layer properties of a given channel
mchehab: I agree it wouldn't be trivial. But in this case I don't think it'd need to be fully generic.
mchehab: do you always have one frontend device node for every demod? Or is that just for this example?
hverkuil: well, it would be possible to use the demod without a frontend
hmm... actually no, I don't think so
it looks to me like we might be trying to shoehorn the existing dvb api onto new kinds of devices where it might not make complete sense
you can... but for testing equipments
like if we had tried to keep the indirect video devnode control model with mc
and professional ones
without subdev devnodes
I don't know enough yet about all kinds of dvb devices that can be expected, so I can't really be sure
pinchartl: it is just the opposite...
it seems that you're trying to force the way MC works for V4L2 onto DVB devices, where it doesn't quite fit for tuner+demod
:-)
tuner+demod should be controlled together
it's pretty clear that we have different views :-D
doing anything different will break things, create a nightmare of driver redesign and produce a crappy end result
tuner + demod should have coherent configuration, I agree with that
but why does the configuration for both need to be passed through a single devnode ?
We've discussed atomic configuration over several device nodes previously in the context of V4L2.
I don't think it's been implemented yet though.
not saying I'm sure it would be better not to pass it through a single devnode, but I'd like to understand
pinchartl: the tuner is actually a frequency downconverter
the tuner takes a signal coming from an antena and cable, extracts a band, and moves it to baseband, right ?
it converts from RF (radio frequency) into an IF (intermediate frequency)
s/antena/antenna/
most tuners don't do baseband
they do IF
the demod is actually another tuner, with a "fixed" frequency (IF)
ok
with IF being "low" I assume
yes
the bandwidth + IF are two obvious parameters that should be identical on both devices
that we agree on
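(Illustration: the coupling just agreed on can be expressed as a toy coherence check over the parameters both blocks must share. The field names and values are invented for illustration only.)

```python
# Toy consistency check for the tuner/demod coupling being discussed:
# bandwidth and IF must match on both sides before tuning can work.
# Field names and values are made up for illustration.

def configs_coherent(tuner_cfg, demod_cfg):
    """True if the parameters both blocks must share are identical."""
    shared = ("bandwidth_hz", "if_hz")
    return all(tuner_cfg[k] == demod_cfg[k] for k in shared)

tuner = {"bandwidth_hz": 8_000_000, "if_hz": 36_000_000, "rf_hz": 474_000_000}
demod = {"bandwidth_hz": 8_000_000, "if_hz": 36_000_000}

assert configs_coherent(tuner, demod)

demod["if_hz"] = 5_000_000                   # drift between the two configs...
assert not configs_coherent(tuner, demod)    # ...must be caught before tuning
```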
most tuners and demods nowadays don't actually use a fixed frequency...
they're programmable
from userspace PoV, IF doesn't matter
just nitpicking
IF might matter
so, the entire IF setting is actually a negotiation between tuner and demod drivers
as you might want to avoid certain frequencies due to EMC requirements
pinchartl: that's something that depends on the hardware, and not something that we want userspace to control
as such a change could, for example, void an FCC approval
it's a system requirement
doesn't have to be pushed to userspace
yes, that's what I said (or tried to say)
but the constraints might come from outside of the tuner + demod
it is actually bridge driver + tuner + demod
Stupid question: if a demod (+tuner) is controlled by a frontend device, then why not associate the frontend device with the demod entity and mark it as 'this device node controls this entity and any upstream entities'. Actually, such a flag will work for most existing v4l2 drivers as well.
(in most cases, we just do tuner + demod, but several drivers allow the bridge driver to force both to use a specific IF)
hverkuil: because the very same tuner can *also* be used for analog TV
But not at the same time, I presume.
no
but we can't drop the tuner subdev and recreate it dynamically
(and I don't think that, in this case, doing that would be a good idea)
no, but you could change the configuration of its output links
So when you use it for analog TV you've changed the links around and the tuner is no longer upstream from the DVB demod.
connect it to the analog TV demod instead of the DVB demod
Or can the DVB demod be used for analog as well?
yes, that's what the DVB MC patches are doing AT<
ATM
then the DVB frontend devnode, if we associate it with the DVB demod, won't have an upstream tuner anymore
hverkuil: some demods support both analog and DTV
(well, actually very few)
I was afraid of that.
wait a second
in that case
let's imagine a tuner supporting ATV and DTV
and a demod supporting both too
I would map the demod as two entities
one for ATV another for DTV
let's consider that the tuner + demod are always associated together in that particular example
because, despite being at the same chip, they're separate IP
ok
but the tuner is the same...
but the tuner is the same IP
yes
with one extra thing...
so the tuner would be dynamically connected to the ATV or DTV demod entities
even if they're in the same chip
it is not just the bandwidth+IF that the tuner needs to know about a given DTV channel to tune...
pinchartl: yes
a modern tuner has several special filters that are enabled/disabled depending on the digital TV delivery system...
and some other parameters related to digital TV reception
makes sense
so, the tuner needs to have access to all the settings that are sent to the demod, in order to do its best to optimize the reception
I can certainly see the passband filter being different
yes
and probably some notch filters could be added/removed depending on the settings
brb
I agree that the configuration of both the tuner and demod need to be in sync
the way the kernel works right now is that we have a dvb "cache" structure
that is set/get via userspace
both the tuner driver and the demod driver have access to this structure
and they both can read and store data there
(for example, the tuner could fine-tune the frequency and store the actual tuned frequency there)
also, this structure carries on statistics about the quality of the reception
there's no hardware communication between the tuner and demod along the lines of "tell me what IF you have been configured for", right ?
it's only software that keeps both configs in sync
(signal/noise ratio, carrier level, number of packets with errors, ...)
pinchartl: that very much depends on the hardware
typically, satellite devices have both tuner + demod in the same chip
back
yes, if it's the same chip, of course
you could have a single register space
only heaven knows what's inside that silicon
without duplication of parameters
also, several devices use a firmware at the demod, which controls the tuner
there's even one driver (dib0700) that implements tuner and demod as separate drivers, but has several callbacks in a way that the
demod can readjust several tuner parameters at runtime
like ADC gains, etc
ok
so, to summarize this
pinchartl: the "hardware" behind several demod drivers is actually really complex and does very weird things...
you have created a frontend DVB API to control both tuner and demod at the same time, due to the tight coupling between the two
and only the manufacturer (hopefully) knows what's there
see the drx-k/drx-j drivers, for example
pinchartl: yes
yet, on modern hardware, the pipeline needs to be reconfigured, as one demod could use more than one tuner
(just one tuner at a time)
also, there may be other components in the "frontend" pipeline
could a device be conceptually imagined with a single tuner, with its output connected to two separate demods ?
(looked at drivers/media/dvb-frontends/drx39xyj/drxj.c just for fun: appallingly bad code. Brrr.)
like internal amplifiers, external amplifiers (several antennas have active components in them)
and some signaling devices for satellite systems
hverkuil: looks like a staging driver :-)
mchehab: could a device be conceptually imagined with a single tuner, with its output connected to two separate demods ?
pinchartl: no, I don't think so
drx drivers are really ugly
those devices have a firmware whose register addresses change from firmware version to firmware version
so, both the driver *and* the firmware need to have the exact same version
if such a device (one tuner and two demods, all active at the same time) existed, it wouldn't be easily supported by the existing DVB frontend API, right ?
they have an SDK for windows that produces both the firmware and the windows driver at the same time
pinchartl: a device that has <n> tuners and <n> demods is easily supported by DVB frontend API
but a device that has <m> tuners and <n> demods, where m != n is not properly supported
no matter if m>n or m<n
I understand that the frontend API works as long as at runtime the tuners and demods are coupled together (coupling can change dynamically), but doesn't work if you have one tuner connected to two demods at the same time. correct ?
the DVB api was conceived assuming m = n
pinchartl: that's correct
so, fingers crossed that nobody creates such a device, or do you think that will happen in the not too distant future ?
one tuner, two demods would likely require a direct control over the tuner
the case where m > n (e.g. more tuners than demods) is more common
pinchartl: it is hard to foresee what'll happen
my question isn't totally innocent of course
:)
if there was a foreseen need to implement direct tuner control in the near future
but if m < n, doesn't that mean that one or more demods are inactive?
then we could use that already for MC-based DVB devices
I'm afraid I have another meeting starting in 5 minutes
food for thought during that time:
hverkuil: eventually more than one demod could be using the same tuner
Hans' idea of having the frontend devnode reported by the demod is interesting
for example, one demod for DVB-C another for DVB-T
slightly restrictive possibly as it won't support m != n
but m != n being unsupported is a restriction of the current DVB API
that could be useful, for example, during channel scan, to speedup tuning time
it would be nice not to carry it over, but it would require splitting tuner control in DVB
of course, in this case, the tuner should be directly controlled via the subdev API
hverkuil: what do you think ?
now I'm off for ~30 minutes
I hope the log won't be too long when I come back :-)
:)
I'm leaving for home as well, to be continued.
I actually need to take a break too
have to solve some issues outside
back in 30-45 minutes
good timing :-)
back
wb
Me, too.
I'm here, but I've to go out in 20 mins
mchehab: quick question: the satellite control part of DVB-S, can you describe what that does?
Does it control an actual dish or is it something totally different?
it controls the voltage level to feed the antenna IF amplifier (called LNBf)
there are 2 voltages: 13V or 18V
depending on the voltage, it tunes a different polarisation
it can also send tones to the satellite system
those tones can do all sorts of things...
like selecting one antenna, on a multi-antenna environment
or select polarization (some devices use voltages, others use the tones, others both)
there's a protocol for that, called DiSEqC
this can even be used to rotate the dish
btw, we *do* have devices right now that have one tuner and two demods
I think the HVR-4000, for example, has such a setup
Can this be considered a satellite connector entity? That sits in front of the tuner?
basically, those devices have one demod chip for DVB-T and another one for DVB-S
but just one tuner
the satellite control is separate from the tuner
(we need connector entities since alsa definitely needs those, but this would be a good example of how it is used in DVB).
different IP block
or even different chips
But for the HVR 4000 I assume only one demod is active at a time? That will still work fine.
yes, but from MC PoV, it should be possible to see what demod is active
hmm...
not sure, really
you can already. The link from the tuner to the inactive demod would be marked inactive.
hverkuil: yes, true
Regarding satellite connector: that way you can have a MC graph like this:
[sat conn] -> [tuner] -> [demod] -> ...
actually, it is:
the frontend device node is associated with the demod and marked that it controls the demod and upstream entities.
[SEC] ---> [coupling connector] <-- tuner -> demod
the coupling connector would let the DC from the SEC pass to the antenna, and the data flow on to the tuner...
and any tone generated by SEC to go to antenna too
the rest of the diagram is: [coupling connector] --> [LNBf] --> [antenna]
(this is a physical diagram)
the frontend device controls demod, tuner, sec, LNA (linear amplifier)
Are you drawing the direction of data or control?
and everything that it is at the antenna system
actually, I mixed ;)
direction of control:
This looks weird: '[coupling connector] <-- tuner'
[SEC] -> [connector] -> LNBf
[SEC] -> [connector] -> Antenna system
direction of data:
Antenna system -> [connector] -> [SEC] - For devices that support DiSEqC version 2, which is bi-directional
on that data connection, it is possible, for example, to enquire the number of antennas in the system
back too
the stream data flow is:
[antenna system] -> [connector] -> [tuner] -> [demod]
OK, that makes more sense :-)
hverkuil: agreed regarding the connector entities, we need them
Btw, for SEC, it is up to userspace to send and receive the control data to set up the satellite system
hverkuil: and there's already DT bindings for connectors
mchehab: remind me again: LNBf is?
it is a low noise amplifier inside the dish
so an external component.
https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcS1A2MRZ6FZE8dxddYXr4fLREkMWlmil_Jmh0i_VVgQvPot5lI10_7q8aFI
this is a typical satellite system:
http://www.eshopsatelite.com.br/img/p/30-288-thickbox.jpg
(actually, this diagram has both satellite and terrestrial)
that's getting a bit out of scope :-)
this is better: http://www.saveandreplay.com/Images/22khz_5sats.png
:)
pinchartl: I was answering some of hverkuil's questions ;)
yes, that's better
I'm not sure we should represent the antennas and diseqc switches
on that diagram, the diseqc switches are used to select between the 5 different LNBf
How would software know how many switches there are?
pinchartl: probably not, but we may need to represent the components that command those
Is that by using DisEqC?
what the Kernel provides is an interface for userspace to send diseqc commands...
but it is up to userspace to identify the components on the satellite system
Since you don't control the outside world you shouldn't represent that as entities.
You stop at the connector level.
yet, the Kernel should be able to represent the interface to the IP block that sends/receives DiSEqC commands...
and the component that sends power levels (0V, 13V, 18V) to the LNBf
(usually, it is the same component, but there are cases where there are actually two separate entities, each with its own I2C address)
there are 3 alternatives for satellite:
one chip with tuner + power supply + tone
3 separate chips, each with its own I2C address
and one chip for tuner, another for SEC (power supply + tone)
the integrated scenario and the tuner + SEC scenario are the more used ones
on all devices I know, there's a one-to-one mapping when the tuner is not integrated with the SEC
but tuner <---> demod is a n:m mapping
pinchartl: btw, if we allow only "data stream" entities to be represented via MC, we can't represent SEC
as SEC is only a control entity
Well, it is sending data to the connector.
(e.g. tones)
mchehab: it's a similar situation as for lens and flash controller
controllers
hverkuil: it is sending control data only
not streaming data
and I think all these need to be supported
OK, proposal:
pinchartl: that's why I think we should have control entities
mchehab: it's a very different kind of control entity :-)
You can mark device nodes associated with entities as: 'controls upstream entities'
but I'm afraid I'm in another telco now :-/
OK. To be continued in that case.
I can multitask, but slowly
mchehab: needs to leave as well, right?
pinchartl: not really
SEC is a hardware-driven API to send HAL commands to the satellite system
a control devnode is a software driven API to send HAL commands to the registers of a device
both are hardware abstraction layers
the only difference is that SEC is hardware, and devnode is software
HAL in the sense of hardware abstraction layer
anyway, I have to go. please think about that. let's continue tomorrow
mchehab: nice try, but I'm still not convinced ;-)
talk to you tomorrow then
hverkuil: for the dvb frontend case I think an "upstream control" flag could solve the issue
I'm not sure it's the best solution
especially if we think about a future need to split tuner control in dvb
yeah, it would be useful, esp. for existing non-MC drivers that usually follow this model.
But SEC doesn't quite fit.
if that need was foreseen in the near future we should implement that first and use direct control only with DVB MC
but as it's not foreseen, we can't really push for such an implementation
I'll need to go in a few minutes as well.
what needs to be taken into consideration if we decide to use such a flag
is what will happen when we'll need to split tuner control in dvb
to make sure we won't be stuck
we'll end by needing to split tuner control in DVB, at least for some devices
there are cases where multiple demods are available to share the same tuner (or the same group of tuners)
when a device supports more than one different delivery system
like DVB-T and DVB-C
anyway, timeout for me
tomorrow, we'll continue
ok
have a nice day
hverkuil, thanks for your answer so far
I did look extensively at vivid and had another closer looks just now
am I correct that s_selection in vivid is 'patching' the format? if called after s_fmt the app may end up with wrong dimensions
I suppose nowadays you should rely more on the dimensions returned by qbuf/dqbuf
lyakh, pinchartl: currently in the am437x driver I am storing the list of subdevs by name (device node name), and in the async bound callback I check the name and store it. I used to override the name assigned by v4l2_i2c_subdev_init() in the sensor driver with the driver name, but as per sailus' suggestion the sub-device name should be (%d-%4.4x)
prabhakarlad|hom: and what's the question?
lyakh: I was just concerned how would I get the i2c address & adapter id without adding alias
in the bridge driver
are you using DT ?
yes
use the async subdev stuff
you shouldn't need the device address or adapter id in that case
larsc: i am using async stuff itself
then you should be able to match on dt node
using V4L2_ASYNC_MATCH_OF
and setting v4l2_async_subdev.match.of.node
pinchartl: i am using the same
but the problem is i have an array of subdevs
when bound is called i need to put the matched subdev in appropriate index
prabhakarlad|hom: you need to know when each individual subdev has probed? it isn't enough to know when all have probed?
lyakh: i need to know when each subdev is probed, so that the bridge has a ptr to the subdevs
prabhakarlad|hom: look at how the omap3isp driver does it in http://git.linuxtv.org/cgit.cgi/pinchartl/media.git/log/?h=omap3isp/dt
the code hasn't been submitted to mainline yet
it will be, very soon
ok
pinchartl: any plans for adding error reporting to the DMAengine API?
larsc: not this month at least
remind me what it was about :-)
I know it's a comment I made somewhere
but can't remember where
just the latest commit in that tree you just linked
ah, yes :-)
goldfish memory...
Thunderbird: what the correct behavior is depends on which features the hardware supports: basically which of the three (cropping, scaling, composing) the hardware supports.
vivid supports all combinations, which is great for testing, but it makes for difficult reading of the code.
qbuf/dqbuf doesn't give you any size. Once VIDIOC_REQBUFS is called the capture size is locked in until the buffers are released again.
in my case my device supports cropping and scaling
I just noticed that legacy applications like to adjust cropcap after doing s_fmt, which is causing me some grief
You mean 'adjust S_CROP'?
CROPCAP just returns cropping capabilities
s_crop yes, which can fall back to s_selection nowadays
Thunderbird: BTW, I'm not normally on irc in the evening (it's almost 11 pm for me), so either ask questions on the linux-media list, or (if you have a bunch of questions) we can arrange a time on irc.
sure, no problem, I was working on the email anyway
It's perfectly legal to do S_FMT followed by S_CROP (or S_SELECTION).
my family is in NL as well, but I'm on the other coast, so I know the issues
thanks a lot for your time so far and have a good rest of your evening
Ideally all the computational complexity dealing with all these different rectangles should be moved to common code that drivers can use. It's on the TODO list, but low.
that would be quite useful
the interaction of the newer APIs with the more legacy ones is a bit confusing
S_CROP maps to S_SELECTION(TGT_CROP). That's all you need to know.
New drivers shouldn't bother with crop but implement the selection API.
The legacy ioctls are automatically converted to the selection API in the v4l2 core.
When you crop after S_FMT, then the scaler is programmed to try and scale to the S_FMT resolution. The crop rectangle may be adjusted if the scaler has limitations.
If you call S_FMT after cropping, then it adjusts the scaler to the new format, possibly adjusting the crop rectangle if scaler restrictions require so.
Basically, always try to achieve a result as close as possible to what the current ioctl requests.
Whether you can crop while streaming is in progress depends on the hardware capabilities and if the FMT resolution remains unchanged.
S_FMT while streaming is always rejected.
(theoretically it is possible in specific cases, but nobody does it)
Let me know if you have any HDMI specific questions. Vivid is a good example driver for that as well.
And I recommend qv4l2 for testing HDMI drivers. It works very well.
and if you are writing a driver, use v4l2-compliance (also part of v4l-utils) for testing your driver.
I'm calling it a day.
ttyl
mchehab: fyi: I've got a reply from the LF - they say no registration is required to take part in a mini-summit only:) see you there!
'v4l2-ctl --list-formats-ext' gives me a lot of 'Interval: Stepwise 0.033s - 0.033s with step 0.000s (30.000-30.000 fps)' -> http://dpaste.com/3A2D3XZ
How to fix?
How to also check formats for mjpeg the other way?
fling: what driver are you using? (v4l2-ctl -D gives the driver name)
fling: not sure what you mean with 'check formats for mjpeg the other way'.
hverkuil: http://dpaste.com/0JCDWVX
I need a list of available resolutions+framerates for mjpeg
Because 7.5fps with uncompressed is not good.
pinchartl: this one is for you: uvc has a bug in the handling of stepwise frame intervals. It needs a index check:
if (fival->index) return -EINVAL;
otherwise enum_frameintervals just keeps going.
fling: it's a uvc driver bug.
you can workaround it in your code by not increasing the index for enum_frameintervals if a stepwise interval is returned the first time.
hverkuil: how then I get the info? :P
hmm hmm is there a patch I can apply?
you can edit the driver code manually. No patch yet, I just found it :-)
uvc webcams with stepwise intervals must be rare or this bug would have been found much earlier.
It is just a regular thinkpad x200 builtin webcam.
How can I turn off the led btw?
I don't like it is flashing right into my eyes.
no idea
tape it over?
This was the first idea.
very practical approach
the dilemma of a day: either open up the laptop, cut into the LED wiring, reverse engineer an LED driver, write an LED driver for it, add it as a subdevice to the laptop camera, write an app to control it OR tape it over
lyakh: I think first option would be easier ;)
prabhakarlad: :)
lyakh: I thought there is a controlling driver already.
fling: np, maybe there is one, sorry, was just kidding:)
lyakh: my old logitech (c620? or some other model) webcam led was not working on old kernels prior 2.6.35 or something.
I thought most of webcams should have the similar kernel controllable thing.
I thought you guys should know about these things as you are working with uvc.
for some webcams the led is hardware controlled, for some it is software
hverkuil: oops :-)
I'll fix that
fling: can I mention you as the bug reporter ?
that would take the form of a "Reported-by: Name <e-mail address>" in the patch
I would need your name and e-mail address for that :-)
pinchartl, hverkuil, sailus: do you have time today for us to finish the MC discussions?
I have
I'm afraid I don't :-(
pinchartl: ok, when do you have a window for this discussion?
good question
I'll be in a plane on Monday
flying to the US west coast
so timezone-wise it might get a bit difficult
I could have time in the evening next week, but Hans will likely be asleep
next week may be hard for me as well, I may need to travel on Monday to the city I used to live in, in order to sort out some things at my old house
I'll also be in a training on Thru/Fri
Wed would likely be ok for me
let me check my schedule for Wednesday
I *might* have some time between 13:00 and 15:00 PDT (UTC-7)
Oslo will still be using winter time
so there will be a 8h difference
and 9h with Finland
so that would be 21:00 to 23:00 for Hans and 22:00 to 24:00 for Sakari
however I'm not sure how available I can be
I'll be attending a conference, but the topics during that slot aren't very interesting to me
but the schedule might change :-/
the other option is 22:00- for me
but that would be 6:00 in Oslo, I don't think it would work :-)
No, that won't work :-)
let me try a different approach then...
I'll write an e-mail to the ML with the Sailus summary, plus I'll add a summary of yesterday's discussions...
let's do that, yes
I'll then reply to my own e-mail with my own comments
eventually, we get this solved without needing another meeting
(or at least maybe we'll agree on the proper approach)
hverkuil: could you reply to that with the "upstream control" proposal ?
bbiab
pinchartl: will do
back
pinchartl, sailus, hverkuil: just sent the notes with my additions...
I think it is better for us to first review it, in order to check if everything is there...
agreed
if you have any comments, I'll generate a version 2 with your reviews
and then we can discuss over the version 2
pinchartl: is it going to 4.0?
back
what is also a bit confusing is how to handle HDMI capture cards
the source dictates the resolution
Well you're in luck - there *aren't* any HDMI capture cards currently supported under Linux. :-)
especially mode changes are fun ..
the closest I saw was the hdpvr device, which seems to have some hdmi support
You should email Hans on the linux-media mailing list. These are all good questions, and as somebody who has done HDMI drivers under Linux I have similar questions about how he expects those cases to work.
(in my case, the drivers didn't implement the new selection APIs because I wanted them to actually work with existing applications).
The HDPVR does *not* have HDMI support - it has HD component capture only.
devinheitmueller: I think hverkuil did some work on that, but for some hardware that it is used only internally
I'm using the new APIs on purpose, because our use case needs some hacky things, which can be done in a nicer way using these APIs
mchehab: it's entirely possible that he got it working on his internal card, but it's not clear that he's ever publicly discussed the expected behavior.
our hardware is also internal, but we may upstream the driver (needs some convincing and need some IP of external companies cleared)
devinheitmueller: yes
Steven and I were talking about this a couple of weeks ago since we're doing an HDMI capture card, and there were lots of scenarios that the API doesn't really appear to handle currently.
(again, it's possible he has some unpublished expectations for how these edge cases should behave)
Oh, and of course there's no indication that *any* applications out there actually use the APIs, so it's not like you can start with the apps and extrapolate how the driver should behave.
ignoring cropping/scaling initially I would have s_fmt just return whatever resolution the device was set to
it was unclear what g_fmt should do, should it return what dimensions are set into your internal driver structures or what the device is actually at?
devinheitmueller: well, as you're working on that, if you're meaning to upstream your work, use the ML to clear up the doubts and submit patches to improve the hdmi support
Thunderbird: that's essentially what I did, which was good enough for gStreamer.
mchehab: it's highly likely that none of this will go upstream. You've made it so difficult that it's not worth my effort anymore.
and if there is no signal, I had g_fmt return ENOLINK/ENOLCK
but it would be nice to get some of the interactions clarified, so I will write a detailed email with what I have done and the issues I'm running into
Thunderbird - I used ENUMINPUT to indicate no signal, but yeah that's probably good enough.
(in my case, I got it working with gStreamer, which is all I really cared about)
enum_framesizes is another fun one, should it just list your active resolution? for hdmi capture you technically support arbitrary resolutions (the limit is the max pixel clock)
query_dv_timings is probably the best one to use now
for timings
devinheitmueller: I don't understand why you think this is so difficult
Thunderbird: most HDMI receivers support a fixed number of resolutions, typically what is mapped in their EDID info.
everyone else is submitting patches and getting them merged
in our case we don't really have a limit
Thunderbird: Ah, ok
mchehab: You only need to review the archives on Steven's attempt to submit the Viewcast 820e driver upstream for prime examples of all the bulls**t you guys are demanding which prevents good code from getting upstream.
I wonder how I would ever get my driver upstream, I think our device would be a misc device since it has video capture, custom GPIO and it acts like a storage adapter as well
mchehab: I'm not really going to argue with you on this again. You've deluded yourself into thinking things are working well and that driver quality is increasing. You're wrong on both counts but you're too close to the problem to see it for yourself.
devinheitmueller: steven seemed to not have the time to properly use the API
it was proposed to submit his driver at staging
mchehab: Yeah, let's REQUIRE Steven use APIs that are being in use by exactly zero drivers.
he didn't answer
He put three days into cleanup work, and it was still rejected. He gave up.
thanks I will prepare a good email, bbl
Thunderbird: Good luck!
devinheitmueller: the api is being used by some drivers under platform
Oh you mean with SOCs that nobody has access to unless you work for Samsung/TI/Freescale/whoever? And with proprietary applications that nobody has access to? Sure, ok.
devinheitmueller: the API should be the same
You want to demand that an API be adopted for new drivers - spend the time to demonstrate it working with a retail tuner.
Viewcast 820e is also a product that very few people have...
it is for professional usage
You would like to think so, but there are *plenty* of reasons why that would be wrong - mainly because the use cases had never been hit before and thus nobody knew what the answers should be.
also, you're free to comment/review any API
I tend to wait for a reasonable time before merging API patches to make sure that everyone interested can comment
Until somebody puts an API into actual use, that's when you find the edge cases nobody thought of.
I'm all for discussing the needs and improving the API
but sending a very big driver without discussing the API first is a big no
if such driver needs API, API should be discussed first
Look at the wonder that is videobuf2 - despite the *requirement* that Steven use it for the 820e, it wasn't being used for even a single standalone video capture device until *****I***** ported em28xx. Takes balls to demand somebody else shake all the bugs out of your API while not being willing to port a single tuner driver to it first.
as the API affects not only a specific driver
Steven intentionally didn't adopt the new API/framework because there were absolutely zero applications publicly available using them.
VB2 didn't start with em28xx
need to go
bbiab
That was the first piece of consumer hardware that supported it. It was totally unvalidated for a general purpose capture card until then. - which is pretty pathetic.
ttyl.