#v4l 2020-05-19,Tue

<sailus> hverkuil: Hi! [11:40]
<hverkuil> sailus: hi! [11:40]
***mort has quit IRC (Remote host closed the connection) [11:40]
<sailus> The ov5647 driver's maintainer has gone away.
I guess I'll send a patch to mark it orphaned.
The driver is receiving patches, though, so there's a fair chance of getting a new maintainer.
[11:43]
<hverkuil> Changing it to orphaned is OK. Or perhaps become the maintainer yourself and set it to Odd Fixes; I've done that in the past for other drivers. But I think that only makes sense if you have hardware with which you can test. [11:46]
<sailus> I don't. So I'll send a patch to orphan it.
That is also a good hint that a new maintainer would be welcome. :-)
[11:50]
..... (idle for 22mn)
<kbingham> sailus, I think we plan to make some large changes to that driver soon. We discussed that with RPi, and I believe I recall they think that the mainline kernel driver doesn't even work currently...
sailus, And thank you for going through the GMSL patches. Given how late we are now, and that we still need to get some DT improvements done, I think we're going to miss 5.8. But at least we now have some confidence for 5.9 ;-)
5.8 might have been a good target if we assume it might be an LTS ... but ho hum.
[12:12]
<hverkuil> ndufresne: for our upcoming stateful encoder discussion I've rebased the 5 patches and made them available here: https://git.linuxtv.org/hverkuil/media_tree.git/log/?h=stateful-enc
(These are the 5 patches from the '[PATCH 0/5] Stateful Encoding: final bits' series posted Nov 11 last year.)
[12:17]
<ndufresne> hverkuil: thanks. Mind if we delay 15 minutes? That will give me time to read through first. [12:23]
<hverkuil> sure, no problem.
Reading through the email thread for '[PATCH 0/5] Stateful Encoding: final bits' was most useful for me.
ndufresne: just ping me when you are ready.
[12:24]
<sailus> kbingham: You're welcome.
kbingham: If someone uses ov5647, there's a good occasion to become a driver maintainer here. :-)
[12:27]
.... (idle for 16mn)
<kbingham> sailus, well I've got a couple already. Seems Dave wants it anyway ;-) [12:44]
<sailus> kbingham: I wouldn't mind more maintainers / reviewers, but one is enough. [12:57]
<ndufresne> hverkuil: ok, I'm back [13:11]
<hverkuil> ndufresne: it's a good moment to see if this can be finalized. [13:12]
<ndufresne> indeed, from my reading a lot seems covered, I only noted a few things [13:12]
<hverkuil> Based on past discussions I think I'll drop V4L2_CID_MPEG_VIDEO_ENC_FRAME_RATE and instead use S_PARM to set the encoder framerate.
It just needs to be documented.
[13:13]
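
A minimal sketch of what that could look like from userspace, assuming an already-open multiplanar stateful encoder fd; the helper name and the 30 fps example are illustrative, not taken from the series:

    /* Sketch: set the encoder frame rate through VIDIOC_S_PARM on the
     * OUTPUT queue instead of a dedicated V4L2_CID_MPEG_VIDEO_ENC_FRAME_RATE
     * control.  timeperframe is the inverse of the frame rate. */
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int set_encoder_framerate(int fd, unsigned int fps_num, unsigned int fps_den)
    {
        struct v4l2_streamparm parm;

        memset(&parm, 0, sizeof(parm));
        parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        parm.parm.output.timeperframe.numerator = fps_den;   /* e.g. 1  */
        parm.parm.output.timeperframe.denominator = fps_num; /* e.g. 30 */

        return ioctl(fd, VIDIOC_S_PARM, &parm);
    }

    /* Usage: set_encoder_framerate(fd, 30, 1); for 30 fps */
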
<ndufresne> yes, I think it's better: it's a smaller change, and more likely compatible with existing userspace [13:13]
<hverkuil> I'm uncertain about the V4L2_BUF_FLAG_TOO_SMALL error flag. It's useful to indicate that the buffer is too small, but without the capability of freeing existing buffers it's kind of useless. [13:14]
<ndufresne> hverkuil: I noted one ambiguity: there is a paragraph that says that queuing 1 raw image may lead to multiple (or no) done buffers on the CAPTURE queue, but it says this is for reordering only
I'll come back to the TOO_SMALL flag right after
in H.264, interlaced frames are most of the time interleaved in 1 buffer, but can be encoded in 1 or 2 buffers
the natural streaming unit is an AU, and an AU can be a frame or a field
so my proposal would be to add something like "or when the encoded format's semantics suggest so", or something along those lines
[13:14]
<hverkuil> which paragraph are you referring to, exactly? [13:17]
<ndufresne> a buffer queued to ``OUTPUT`` may result in more than one buffer produced on
+ ``CAPTURE`` (if returning an encoded frame allowed the encoder to return a
+ frame that preceded it in display, but succeeded it in the decode order),
[13:17]
<hverkuil> ah, ok. [13:18]
<ndufresne> the statement is useful, I think the () bit is too strict
In general, I'm inclined to suggest that AU alignment is best for H.264, and that a codec-specific control can be used to control that when possible
[13:18]
<hverkuil> What does AU stand for? You mentioned it in the past, but I forgot :-) [13:20]
<ndufresne> an AU means access unit; it's all the elements composing one decode operation
well, no, one frame decode
it's a bit vague, so to clarify: it can be a frame (all the slices of a frame) or a field (top or bottom), but it can also be an optional sub-layer (aka enhancement deltas)
[13:21]
<hverkuil> How about just changing the text in () to:
for example, if returning an encoded frame allowed the encoder to return a frame that preceded it in display, but succeeded it in the decode order; but there may be other reasons for this as well
[13:22]
<ndufresne> that would be good [13:23]
<hverkuil> The key thing is that multiple capture buffers can be produced, and we just give an example. [13:23]
<ndufresne> yes, I think the idea was just to discourage implementations that queue(OUT), wait(CAPTURE) in lock step [13:24]
<hverkuil> let me make this change... [13:25]
<ndufresne> an ideal implementation will use an asynchronous flow: a select/poll/epoll in a thread, or some higher abstraction like futures in specific languages
hverkuil: ok, I'll give you a few minutes, then we can try and solve the issues with TOO_SMALL
(or the non-issues, I don't know yet, I forgot a bit of our discussions in that regard)
[13:25]
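
A rough sketch of such an asynchronous flow, assuming an m2m encoder fd with both queues streaming; dequeue_capture() and requeue_output() are hypothetical application helpers, not V4L2 API:

    /* Sketch of a poll()-driven loop for a memory-to-memory encoder, run in
     * its own thread: POLLIN means an encoded CAPTURE buffer is ready,
     * POLLOUT means a raw OUTPUT buffer can be dequeued and refilled. */
    #include <poll.h>

    void dequeue_capture(int fd);   /* hypothetical: DQBUF on CAPTURE, consume bitstream */
    void requeue_output(int fd);    /* hypothetical: DQBUF on OUTPUT, refill and QBUF    */

    void encoder_event_loop(int fd)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN | POLLOUT };

        for (;;) {
            if (poll(&pfd, 1, -1) < 0)
                break;
            if (pfd.revents & POLLIN)
                dequeue_capture(fd);
            if (pfd.revents & POLLOUT)
                requeue_output(fd);
        }
    }
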
<hverkuil> OK, the new text reads:
"(for example, if returning an encoded frame allowed the encoder to return a frame that preceded it in display, but succeeded it in the decode order; however, there may be other reasons for this as well)"
It's a bit awkward, but it's at least better than it was.
The idea with the TOO_SMALL flag is that userspace is informed that the capture buffer was too small, so it can allocate new, larger buffers and queue those up. But that only makes sense if userspace can also free the too-small buffers it allocated earlier, and we don't have that functionality.
Without support for that, all the flag does is expose the reason why the buffer was returned in the ERROR state.
[13:30]
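
For context, a sketch of the dequeue side under those constraints, assuming a multiplanar encoder with MMAP CAPTURE buffers; only the mainline V4L2_BUF_FLAG_ERROR is checked, since the proposed TOO_SMALL flag is not a mainline define:

    /* Sketch: dequeue one encoded buffer and detect the error case.  The
     * proposed V4L2_BUF_FLAG_TOO_SMALL would narrow this down to "buffer
     * too small"; without it only the generic error flag is visible. */
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int dequeue_bitstream(int fd)
    {
        struct v4l2_plane planes[VIDEO_MAX_PLANES];
        struct v4l2_buffer buf;

        memset(&buf, 0, sizeof(buf));
        memset(planes, 0, sizeof(planes));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.m.planes = planes;
        buf.length = VIDEO_MAX_PLANES;

        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
            return -1;

        if (buf.flags & V4L2_BUF_FLAG_ERROR) {
            /* Recovering would mean allocating larger CAPTURE buffers, which
             * today requires stopping streaming and re-running VIDIOC_REQBUFS;
             * there is no API to delete individual buffers, so most
             * applications simply drop the frame. */
            return 0;
        }

        return (int)buf.m.planes[0].bytesused;  /* size of the encoded frame */
    }
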
<ndufresne> (reading) [13:34]
<hverkuil> I don't think this is a prerequisite for getting the stateful encoder documentation merged (it can be added later).
And in any case you would need to have a driver that actually uses this vb2 error state.
[13:35]
<ndufresne> ok, that matches my current understanding: you could not fix it for the current frame, but could at least proceed with a reset operation, choosing larger buffers
I think the worry one would raise is that this might not be a sustainable long-term method of error reporting
as there is a limited number of flags available
hverkuil: maybe it's safer to remove that for now, and come back with support for error codes?
I don't know how errno is delivered to userspace, I have never learned that, but it would be an option, or we could have an error code control too
might require a bit of plumbing, as the code needs to stick with the buffer and be exposed on the DQBUF operation
[13:36]
<hverkuil> I agree that it is safer to remove it for now. But I am not sure we need error codes: in all the years we've used V4L2 this is the first time we need a specific reason for the ERROR state. In general you just skip the frame or give up, but this case is special, i.e. it is recoverable by userspace (except for the missing API to delete buffers!)
Supporting actual error codes is something that can be done in a future v4l2_ext_buffer struct.
[13:41]
<ndufresne> ok [13:43]
<hverkuil> So I think the best way forward is to document S_PARM for stateful encoders (stateless as well?), and then we are ready.
Drop everything else that's in that 'final bits' series.
[13:44]
<ndufresne> I cannot say for stateless encoders yet
the stateless encoders are basically fixed-QP encoders that do not produce the headers
[13:44]
<hverkuil> I'll just be explicit that it's for stateful encoders. [13:45]
<ndufresne> safer
I think stateless encoders do not specifically need timing information
[13:45]
<hverkuil> I'll see if I can make a new version today or tomorrow and post it.
It's been on my TODO list for ages, time to finish this.
[13:46]
<ndufresne> thanks a lot, and if I can help in any way, let me know
I think with these two changes we have a safe implementation, and we can enhance it later with real implementations
I'm also learning that just because the HW can do something doesn't mean it will ever be useful
[13:47]
<hverkuil> One more thing: we need ENUM_FRAMEINTERVALS support as well since userspace needs to know the min/max framerates. What do you think? [13:48]
<ndufresne> This is a bit per-codec though [13:49]
<hverkuil> ENUM_FRAMEINTERVALS? [13:49]
<ndufresne> for H.264/HEVC, it's already covered by the level
and in many implementations, going higher than the spec may work, but not in real time
offline transcoding slower than real time is also a use case
hverkuil: but I would not disallow it, as there might be a limit linked to the number of bits in the registers
[13:49]
<hverkuil> Recommended, but optional? [13:52]
<ndufresne> Yes, let's do that
in GStreamer, folks rarely have a rate adapter in their pipeline, but if they do, and enum-frameintervals is implemented, then the rate adapter will be able to negotiate a supported rate, so having it may help in a few niche cases
in gst notation, that would be videosource ! videorate ! encoder ! ...
[13:53]
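
A sketch of what the optional VIDIOC_ENUM_FRAMEINTERVALS query could look like on the userspace side; the coded pixel format and resolution are example values:

    /* Sketch: enumerate the frame intervals supported for a given coded
     * format and resolution so userspace can learn the min/max frame rate.
     * A driver may report a discrete list or a stepwise/continuous range. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static void enum_encoder_intervals(int fd)
    {
        struct v4l2_frmivalenum fival;

        memset(&fival, 0, sizeof(fival));
        fival.pixel_format = V4L2_PIX_FMT_H264;  /* example coded format */
        fival.width = 1920;                      /* example resolution   */
        fival.height = 1080;

        while (ioctl(fd, VIDIOC_ENUM_FRAMEINTERVALS, &fival) == 0) {
            if (fival.type == V4L2_FRMIVAL_TYPE_DISCRETE) {
                printf("%u/%u s per frame\n",
                       fival.discrete.numerator, fival.discrete.denominator);
                fival.index++;
            } else {
                /* STEPWISE or CONTINUOUS: only min/max/step are valid */
                printf("min %u/%u, max %u/%u s per frame\n",
                       fival.stepwise.min.numerator, fival.stepwise.min.denominator,
                       fival.stepwise.max.numerator, fival.stepwise.max.denominator);
                break;
            }
        }
    }
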
