sailus: I wrote a conclusion mail. Pls. can you verify it? :)
tfiga: ping
tfiga: is the mt2701 JPEG ENC series (https://patchwork.linuxtv.org/cover/60461/) good to go?
tfiga:  and what is the status of mtk-isp? https://patchwork.linuxtv.org/cover/60880/
hverkuil: ping
pinchartl: pong
tfiga: re JPEG ENC: I got way too many checkpatch/compiler/sparse/smatch errors/warnings. I need a new version anyway.
pinchartl: if you are OK with it, then I'll take this patch: https://patchwork.linuxtv.org/patch/61110/
I have a similar one for stm32-dcmi, so it makes sense to merge both via the same PR.
neg: ping
hverkuil: yes I'm alright with that, thanks
sorry for the delay
I wanted to ask you
we don't have any API to pass HDMI infoframes to userspace, right ?
Not at the moment. I always wondered when that would become necessary :-)
I think the answer is now :-)
should we pass that through a metadata node ?
Personally I think that is overkill. Most InfoFrame data rarely changes, so controls seem the right approach.
Possible exceptions are large HDR-related infoframes.
('Extended InfoFrames')
Which InfoFrames are you interested in?
BTW, do you know if neg is around?
I'm interested in HDR10+ :-)
he should be around later today
hverkuil: pong and I will be around in ~20min
neg: ok. Just need to know if https://patchwork.linuxtv.org/cover/60580/ is OK to be merged.
for HDR10+ there are infoframes containing dynamic per-frame data. we need to ensure they can be carried to userspace efficiently, and even more importantly that userspace can match them with video frames
so I don't think a control would be a good idea to report that
possibly events if we can fit all the data in there
hmmmm no I don't think it will fit
HDR10 static metadata are 26 bytes long
but dynamic metadata is longer
I suspect the dynamic metadata won't fit in an event, but you can still use a control and use a control event to check when it is updated. As long as the control is updated before vb2_buffer_done is called you are OK.
Since the InfoFrame is always transmitted before the video frame, that should work well.
except if userspace is too slow and misses a frame
does the control framework support variable-size controls ?
No, and that can be a problem. Do you know the max size of the dynamic metadata?
no
it's specified in CTA-861-G
https://web.archive.org/web/20171201033424/https://standards.cta.tech/kwspub/published_docs/CTA-861-G_FINAL_revised_2017.pdf
page 200
I have it, I'm in the CTA-861 working group :-)
(which is page 206 of the PDF file)
it's specified using ITU-T H.265 syntax
Annex S, right?
Annex R and S
hverkuil: I think so yes
neg: thanks, I'll make a PR for it.
hverkuil: thanks
pinchartl: I tried to find if the worst-case size of the data in those InfoFrames was stated somewhere, but without any luck.
You'd have to work it out from the Annexes. Since the length of an Extended InfoFrame is specified in two bytes, 65535 is automatically the upper limit.
that's a bit big for a control that would change per frame
in practice it should be smaller
absolutely.
but the size can vary per frame
pinchartl: how does the hardware handle these infoframes? Through a DMA engine, or just a large block of registers where the data is stored?
hverkuil, pinchartl, I have exactly the same problem with Venus metadata (the size is too big to be passed via controls on every frame). My heretical idea was to pass an fd through a v4l_control and allocate the buffer through the fresh new dma-heaps
narmstrong: ping
hverkuil: hi
Thank you for the new patch series!
I was curious about one thing:
v3 had an issue with the 'test-media vicodec' test failing. What was the fix for that in v4? The cover letter isn't very clear on that.
was it an issue in v4l2-mem2mem.c or in vicodec itself? (or both)
there are two options, a hardware FIFO, and a DMA engine
is the DMA engine only for these extended infoframes, or also for regular infoframes such as the AVI?
hverkuil: it was in both, I forgot a "!" in v4l2-mem2mem.c and an EOS in vicodec's vicodec_decoder_cmd()
so it only triggered in use cases from test-media
hverkuil: I'm not sure yet
narmstrong: OK, so basically simple bugs and nothing that required an overhaul of the code.
hverkuil: yep
narmstrong: thank you, that's good to know.
hverkuil: my bad, I should have run more tests
pinchartl: OK. Note that it is possible for 2 Extended InfoFrames to be transmitted: a 'regular' HDR Dynamic Metadata and also a Graphics Overlay (see CTA-861.4).
The last one is just a single byte, but I'm not sure how your HW will deal with that.
hverkuil: the hardware is being developed so I'm not sure yet :-)
and we can thus take requirements into account
In any case, IMHO the HDR Dynamic Metadata should end up in a metadata buffer, everything else should use controls. The Graphics Overlay flag can be an exception and go through a control as well.
On the other hand, to be more future proof we should probably allow for more than 2 Extended InfoFrames in the future. They can safely be concatenated in a buffer by a DMA engine, as long as the buffer is large enough to hold them all.
Hello, I'm writing some documentation for the ffmpeg project for the v4l2m2m wrappers to interface with hardware encoders
We currently have an option that allows us to request a certain number of capture/output buffers via the VIDIOC_REQBUFS ioctl call
For a user, are there any trade-offs for selecting a larger / smaller number of buffers? Am I correct in understanding that it doesn't affect latency?
Or is this seen more as a debug option when implementing a driver
taliho: it is related to how quickly userspace processes captured buffers: if there is a lot of jitter, esp. if the processing time can exceed the frame period, then you need more buffers.
hverkuil:  This makes sense for a capture device if you want to avoid dropping frames
The same can happen with codecs (m2m devices)
but we are using it for example to decode a compressed h264 stream.. I always thought that if you fail to dequeue the capture buffers, you will not be able to enqueue packets on the output buffer
the output and capture buffers are independent.
Each queue can have its own number of buffers.
ok then it makes sense :)
thank you very much for your help!
My pleasure.
hverkuil: sorry one more question... Is there any disadvantage to always using the largest possible capture/output buffer offered by the device?
taliho: waste of memory. Video takes a lot of memory, so you don't want to unnecessarily allocate more buffers than is required.
Usually 3 or 4 buffers are sufficient.
codecs often need more buffers, but the driver will increase the number of buffers to the minimum required.
hverkuil: thank you, again!