mchehab: hverkuil: do you have the si2168 datasheet? Thanks
syoung: no, sorry.
hverkuil, we are trying to understand what the rationale was for some code, which was apparently done after some discussion with you: https://git.linuxtv.org/media_tree.git/tree/drivers/staging/media/rockchip/vpu/rockchip_vpu_v4l2.c#n226
Basically, "ignoring" the capture queue would prevent userspace from allocating larger capture buffers
Why should the kernel prevent userspace from doing that? Userspace knows best what size the buffers should be; maybe we have an adaptive stream, between 480p and 1080p, and want to allocate for 1080p?
ndufresne: the OUTPUT format is effectively the source of the CAPTURE format. It's like capturing from a sensor: unless the hardware can scale or compose, you are stuck with the sensor resolution. If you want larger buffers, then use VIDIOC_CREATEBUFS.
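(A minimal userspace sketch of the VIDIOC_CREATEBUFS approach hverkuil suggests, assuming a multiplanar NV12 capture queue; create_larger_buffers() is a hypothetical helper, not an existing API:)

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Create capture buffers sized for 1080p even if the current
     * format is smaller; TRY_FMT lets the driver fill in sizeimage. */
    static int create_larger_buffers(int fd)
    {
        struct v4l2_create_buffers create;

        memset(&create, 0, sizeof(create));
        create.count = 4;
        create.memory = V4L2_MEMORY_MMAP;
        create.format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        create.format.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12;
        create.format.fmt.pix_mp.width = 1920;
        create.format.fmt.pix_mp.height = 1080;
        create.format.fmt.pix_mp.num_planes = 1;

        if (ioctl(fd, VIDIOC_TRY_FMT, &create.format) < 0)
            return -1;
        return ioctl(fd, VIDIOC_CREATEBUFS, &create);
    }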
hverkuil, that only means you need a minimum buffer size
Changes to this would require a larger overhaul of how formats are used. That's something for the future (some preliminary work has already been done).
why? it's just a random limitation added in software
if you patch all drivers to enforce something, it will be much harder to fix later, so why do we do that in the first place?
also, why do we need to set the width/height on the output side of a stateless decoder?
isn't that information in the PPS instead?
(expressed in a codec-specific way)
anyway, all this seems like arbitrary restrictions to me, I just can't make any sense of it atm
it's as if we want to enforce how the driver will be used, instead of actually exposing the HW capabilities and leaving userspace code some room for creativity ...
The spec defines that sizeimage is set by the driver. We now allow it to be set by userspace as well for compressed formats, but changing the behavior for raw formats might very well have unforeseen side effects since applications might have random values filled in for sizeimage. So that's not going to change with the existing API.
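(For illustration, a sketch of what "set by userspace for compressed formats" looks like; the pixel format and size values here are arbitrary examples:)

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* On a compressed OUTPUT queue the application may propose its own
     * sizeimage; for raw formats the driver computes and overwrites it. */
    static int set_compressed_output_fmt(int fd)
    {
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
        fmt.fmt.pix_mp.width = 1920;   /* coded resolution */
        fmt.fmt.pix_mp.height = 1080;
        fmt.fmt.pix_mp.num_planes = 1;
        fmt.fmt.pix_mp.plane_fmt[0].sizeimage = 2 * 1024 * 1024;

        return ioctl(fd, VIDIOC_S_FMT, &fmt); /* driver may still adjust */
    }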
this has nothing to do with it
the sizeimage on capture is calculated from the width/height of the same queue
the spec has no notion of a relationship between queues
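(What ndufresne describes is the usual driver-side derivation; a rough sketch for NV12, not any specific driver's code:)

    #include <linux/videodev2.h>

    /* sizeimage follows from the queue's own width/height:
     * one luma plane plus a half-size interleaved chroma plane. */
    static void fill_nv12_sizeimage(struct v4l2_pix_format_mplane *pix_mp)
    {
        pix_mp->plane_fmt[0].bytesperline = pix_mp->width;
        pix_mp->plane_fmt[0].sizeimage =
            pix_mp->width * pix_mp->height * 3 / 2;
    }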
which spec? The stateless decoder spec?
I thought you were referring to the main spec
all this makes me feel we are running toward a wall
The interaction between output and capture formats is part of the decoder spec (or really all mem2mem devices), since for those the two queues do have a relationship.
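(The coupling in question, in rough form; the context struct and function names below are illustrative, not the actual rockchip_vpu code linked above:)

    #include <linux/videodev2.h>

    struct vpu_ctx {
        struct v4l2_pix_format_mplane src_fmt; /* OUTPUT: coded stream */
        struct v4l2_pix_format_mplane dst_fmt; /* CAPTURE: decoded frames */
    };

    /* Setting the OUTPUT (coded) format forces the CAPTURE (decoded)
     * resolution to match; capture sizeimage is then derived from it. */
    static int s_fmt_output(struct vpu_ctx *ctx,
                            const struct v4l2_pix_format_mplane *pix_mp)
    {
        ctx->src_fmt = *pix_mp;
        ctx->dst_fmt.width = pix_mp->width;
        ctx->dst_fmt.height = pix_mp->height;
        return 0;
    }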
I think that's where we are wrong, we cannot define this relationship
What that relationship is can be changed: the stateless decoder spec is still RFC.
it's not generic at all
Just an example, you said "unless the hardware can scale or compose"
Saying that there is no relationship is also an option. I vaguely remember a past discussion of keeping the capture and output formats completely separate.
well, turns out that yes, all Hantro-based decoders come with an IPP that has limited scaling and colorspace conversion support
we don't enable it, because we need to get something working first, but I believe pH5 will be interested
hverkuil, I'm wondering if we didn't make a mistake with reusing the format state to transmit the new format on format changes ...
basically, you want to know what the new resolution will be, but could avoid re-allocation if you already have enough buffers and they are large enough
or maybe the issue is just that allocation / queuing / format are too tightly coupled ...
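(A sketch of the reuse logic ndufresne has in mind, assuming the decoder signals resolution changes through V4L2_EVENT_SOURCE_CHANGE; on_source_change() is a hypothetical helper:)

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Returns 0 if the existing buffers can be reused for the new
     * resolution, 1 if the caller must reallocate, -1 on error. */
    static int on_source_change(int fd, __u32 cur_buf_size)
    {
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
            return -1;

        if (fmt.fmt.pix_mp.plane_fmt[0].sizeimage <= cur_buf_size)
            return 0; /* keep the buffers we already have */
        return 1;
    }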
The missing piece in all this is that there is no clean way of asking the driver what the size of the buffer should be for a certain resolution.
That's an API problem that's been known for a long time.
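(The closest thing today is probing with VIDIOC_TRY_FMT, sketched below; but on drivers that clamp the capture resolution to the output queue, as discussed above, this returns the clamped size, which is exactly the gap hverkuil mentions. query_sizeimage() is hypothetical:)

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Ask what sizeimage a given resolution would need, without
     * changing any driver state. */
    static int query_sizeimage(int fd, __u32 width, __u32 height, __u32 *size)
    {
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12;
        fmt.fmt.pix_mp.width = width;
        fmt.fmt.pix_mp.height = height;
        fmt.fmt.pix_mp.num_planes = 1;

        if (ioctl(fd, VIDIOC_TRY_FMT, &fmt) < 0)
            return -1;
        *size = fmt.fmt.pix_mp.plane_fmt[0].sizeimage;
        return 0;
    }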
Everything else is fine (although the stateless decoder spec probably needs a bit more work).
Sorry, dinner time. It's easier for me (given the timezone differences) if you post your concerns to the mailing list.
svarbanov: there are several patches for venus at patchwork. Please don't wait until late in the -rc cycle to send them to me
hverkuil: is there any particular reason vivid/Makefile uses vivid-objs instead of vivid-y? I saw that other media drivers do this as well. Reading Docs/kbuild/makefiles.txt, vivid-y looks like the more intuitive approach
tonyk: it's all copy-and-paste :-)