#v4l 2020-02-07,Fri


grohne: sailus: Onsemi MT9M024 and AR0141CS
sailus: I think I could even post a dysfunctional driver for them. they have some kind of "firmware". most other media/i2c drivers simply embed their "firmware" as hex sequences. the current state of affairs is that I don't have legal clearance for the "firmware" part
sailus: similar to adv7170.c init_NTSC or adv7175.c init_common. still without such parts the driver doesn't work
I'm not sure if posting an RFC driver with certain parts /* censored */ would be good or bad. :-/
entirely independent of my work, Eng-Hong SRON of parrot.com created a very similar out-of-tree driver for MT9M021
https://github.com/parrot-opensource/disco-opensource/blob/master/sources/linux-3.4.11/linux-3.4.11/drivers/media/video/mt9m021.c
it also has "firmware" in mt9m021_seq_data
[07:02]
............ (idle for 57mn)
sailus: grohne: Feel free to post an RFC set with some bits omitted. [08:08]
..... (idle for 21mn)
paulk-leonov: ndufresne and others: do you know if emulation prevention bytes in h.264 are tied to the annex-b bytestream format or if they can also be found in NAL unit streams? [08:29]
.................................................... (idle for 4h17mn)
ndufresne: paulk-leonov: from what I've seen, they are present in AVCc streams too, not sure why though, might be tied to the decoding process
do you encounter any issues related to that ?
[12:46]
..... (idle for 23mn)
hverkuil: I'm preparing another PR with regression fixes for v5.6: does anyone know of important patches (not already marked as being for v5.6) that should be added? [13:09]
..... (idle for 24mn)
paulk-leonov: ndufresne: no issue in particular, was just wondering :) [13:33]
................. (idle for 1h21mn)
taliho: paulk-leonov: ndufresne: annexb just adds the startcode 0x000001
the emulation prevention 0x03 bytes are in the nal units themselves
[14:54]
paulk-leonov: okay :) [14:55]
taliho: they are removed when you extract rbsp_bytes from the nal unit [14:55]
ndufresne: which in our case is done by the hardware/firmware [14:56]
paulk-leonov: I was asking since the hantro encoder doesn't seem to produce the start-code
and I wondered whether just adding it is enough
so the answer should be yes
[14:58]
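What taliho describes — prepending the 0x000001 start code and escaping the payload with emulation prevention bytes — can be sketched as follows. This is a hypothetical helper for illustration only, not taken from any driver; `annexb_wrap` and its layout are made up:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: wrap an RBSP payload into an Annex B byte
 * stream. Prepends the 0x000001 start code, then copies the payload,
 * inserting an emulation prevention 0x03 byte whenever two zero bytes
 * are followed by a byte <= 0x03 (which would otherwise form a
 * forbidden 0x000000..0x000003 sequence). Returns bytes written, or 0
 * if dst is too small. */
static size_t annexb_wrap(const uint8_t *rbsp, size_t len,
                          uint8_t *dst, size_t dst_size)
{
    size_t out = 0, zeros = 0;

    if (dst_size < 3)
        return 0;
    dst[out++] = 0x00;
    dst[out++] = 0x00;
    dst[out++] = 0x01;

    for (size_t i = 0; i < len; i++) {
        if (zeros == 2 && rbsp[i] <= 0x03) {
            if (out == dst_size)
                return 0;
            dst[out++] = 0x03;  /* emulation prevention byte */
            zeros = 0;
        }
        if (out == dst_size)
            return 0;
        zeros = (rbsp[i] == 0x00) ? zeros + 1 : 0;
        dst[out++] = rbsp[i];
    }
    return out;
}
```

Since the hantro encoder already emits the full NALU with escaping done, userspace would only need the start-code part of this.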
ndufresne: it should be yes, are you working on the Hantro encoder ? [14:58]
paulk-leonov: ndufresne: indeed I am
I need to write a summary email about my findings
[14:58]
ndufresne: ok, let ezequielg know, as we already started some analyses
paulk-leonov: do you have google v4l2 plugin and driver implementation ?
[14:59]
paulk-leonov: yeah I've looked at them
and mpp too
[15:00]
ndufresne: if you compare Hantro to NVIDIA, AMD, and to Intel Shaders, the main difference is that Hantro does not come with a firmware dealing with bitrate control [15:00]
paulk-leonov: indeed
rate control is going to be one of the painful points
(not the only one, sadly)
[15:00]
ndufresne: so you have to implement your own, but as you've probably seen, they seem to get away with a very simplistic implementation, works for WebRTC for sure [15:01]
paulk-leonov: the main issue seems to be that the rate control parameters are highly hardware-specific [15:01]
ndufresne: the parameter for controlling that is what we aren't sure can be generalized, our current status is that without another similar HW, we are a bit stuck [15:02]
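A "very simplistic" bitrate adaptation scheme of the kind mentioned here could look like the toy sketch below. The struct name and the 10% dead band are invented for illustration and are not taken from the ChromeOS plugin or any driver:

```c
/* Toy constant-bitrate controller: one QP step per frame, based on how
 * far the last coded frame strayed from its per-frame bit budget. */
struct toy_rc {
    int qp;           /* current quantizer, 0..51 for H.264 */
    int target_bits;  /* per-frame budget = bitrate / framerate */
};

static void toy_rc_update(struct toy_rc *rc, int coded_bits)
{
    if (coded_bits > rc->target_bits + rc->target_bits / 10)
        rc->qp++;     /* frame too big: quantize harder */
    else if (coded_bits < rc->target_bits - rc->target_bits / 10)
        rc->qp--;     /* frame too small: spend more bits */

    if (rc->qp < 0)
        rc->qp = 0;
    if (rc->qp > 51)
        rc->qp = 51;
}
```

The hardware-specific part discussed above is everything this sketch leaves out: how QP maps to actual bits for a given IP, and any per-macroblock controls the hardware exposes.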
paulk-leonov: I've looked a bit more at the allwinner encoder too
and there's rkvenc in mpp
[15:02]
ndufresne: so allwinner does have an encoder ? I wasn't certain [15:02]
paulk-leonov: it does, yes [15:02]
ndufresne: right, the RK encoder seems like a clone of Hantro, it usually works the same, but with a different register layout [15:03]
paulk-leonov: ndufresne: ah?
I thought it was a different IP
ndufresne: I'm not talking about vepu1/2
which are indeed hantro
there's another IP used in some socs
[15:03]
ndufresne: yes, after doing the decoder, our feeling is that RK wrote their own to reduce licence cost, but they largely copied the interface of Hantro in order to keep the same software underneath [15:04]
paulk-leonov: I see [15:04]
ndufresne: if you look at RK history, it starts with hantro, and then hybrids, I bet long term there will be only RK chips there [15:04]
paulk-leonov: note that they also have an h265 encoder, but it seems to be stateful (requires a firmware anyway) [15:05]
***benjiG has left [15:05]
ndufresne: on which RK ? [15:05]
paulk-leonov: ndufresne: RK3228H and RK3328
I've made this document to keep track: https://leonov.paulk.fr/collins/~paulk/paste/index.php?paste=ac34a93deb5e05ca924498673026d655&raw=true
[15:06]
ndufresne: ah, no idea about these, on RK3399 it was shared with VP9 and both seem stateless [15:06]
paulk-leonov: for the encoder?
I thought rk3399 didn't have an h265 encoder
[15:09]
ndufresne: do they have HEVC encoder on RK3399 ?
yeah, I don't think they had that
[15:09]
paulk-leonov: right
I think it's the rkvdec of rk3399 that also does vp9 and h265
since it doesn't seem to have a G2
[15:10]
ndufresne: no G2 indeed [15:11]
paulk-leonov: ah also, new (starting with H6) allwinner platforms seem to include the vp9-only G2
that google is apparently giving away for free
[15:11]
ndufresne: ah, that will simplify things, didn't jernej say the VP8 accelerator was largely inspired by the G1 ? [15:13]
paulk-leonov: could be, I don't remember precisely [15:13]
ndufresne: for Amphion Malone, I believe these are all firmware driven [15:13]
paulk-leonov: okay, I didn't find that much info about it [15:14]
ndufresne: MXC is the name of the firmware subsystem iirc
I believe it handles multiple HW
they got v4l2 drivers downstream already
but usually hides it in their abstraction library to support the move from i.MX6 to i.MX8
[15:14]
paulk-leonov: hehe [15:16]
ndufresne: they reportedly got GStreamer working directly, with minimal changes, I mostly agree with the changes but they never followed up on the code review
we will be working on Hantro VC8000 possibly this year, so we'll be able to shed some light on whether it is similar to the already known G1/2 H1/2
[15:16]
paulk-leonov: huh, didn't even know about this hantro line [15:19]
ndufresne: paulk-leonov: did you check if the slice headers are generated by the HW ? (I bet they are, but just to double check) [15:32]
paulk-leonov: ndufresne: yes, they are [15:32]
ndufresne: ok, that matches VA-API on that front [15:33]
paulk-leonov: what we get is the full NALU without start code [15:33]
ndufresne: so we have to craft the PPS/SPS/SEIs, and the encoder takes care of the NALU/slice
what's strange here, and we'll need to discuss it with hverkuil pinchartl and others, is that crafting these is relatively straightforward; bitrate adaptation is likely less complex than a scheduling algo or kernel threading
so I'm not sure why encoders cannot use the stateful interface, a bit like we do for the JPEG encoder
[15:33]
paulk-leonov: mhh, so the kernel side would generate those? [15:35]
ndufresne: for parsing, we have compelling arguments not to, but for encoders, so far it feels like the arguments weren't that strong [15:36]
paulk-leonov: I assumed that was off the table, but it could help [15:37]
ndufresne: I don't know, a discussion must happen I believe, especially since hverkuil had made having an encoder a de-staging requirement [15:37]
paulk-leonov: right [15:38]
hverkuil: how to handle stateless encoders is still up in the air, so if there are good arguments, then that's something that can be discussed. It's certainly not decided one way or another. [15:38]
ndufresne: That's awkward, because if you look at competitors, like Microsoft, they don't do stateless encoders, only decoders [15:38]
paulk-leonov: I'll try to craft an email without waiting too much
anyway to me the biggest point is how to handle rate control
[15:38]
ndufresne: yes, and if it's HW specific, having it in the kernel makes things a lot simpler, but it does have an impact on vendors: they can't compete on the quality of their secretive bitrate adaptation algorithm unless they have a firmware blob [15:39]
hverkuil: And while I said that an encoder is a requirement for de-staging, it is actually a bit more subtle: we need to have a good understanding of how to handle stateless encoders and the impact on the stateless decoder API. [15:41]
paulk-leonov: understood [15:41]
hverkuil: If it is clear that stateless encoders and stateless decoders are different beasts, then we can start de-staging. [15:41]
ndufresne: ok [15:42]
hverkuil: Got to go, sorry. [15:42]
ndufresne: the term stateless still holds for sure, this HW has no idea about previous frames that have been encoded apart from the frame we pass as reference
paulk-leonov: correct me if I'm wrong, but G1 H264 is linear reference only, the reference frame is actually the last N frames, except on IDR boundary, right ?
[15:43]
paulk-leonov: ndufresne: my understanding is that it can take 1 reference frame for P slices
which is the reconstruction buffer
of the previous frame
[15:45]
ndufresne: just one, wow, that simplifies a lot [15:45]
paulk-leonov: yeah, that's another pain point
I guess a generic interface shouldn't assume that only 1 reference is possible
and one problem is that it's the reconstruction buffer that needs to be kept, not the source (output) buffer
currently the cros driver will just use two buffers, one for reconstruction and one for reference, and just alternate them
[15:45]
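The two-buffer alternation described here can be sketched as a simple ping-pong. The names are hypothetical; the cros driver's actual bookkeeping may differ:

```c
/* Sketch of the two-buffer scheme: the encoder reconstructs the current
 * frame into one auxiliary buffer while reading the previous
 * reconstruction as the reference, then the roles swap for the next
 * frame. */
struct recon_pair {
    void *buf[2];
    int cur;  /* index the encoder reconstructs into this frame */
};

static void *recon_dst(struct recon_pair *p) { return p->buf[p->cur]; }
static void *recon_ref(struct recon_pair *p) { return p->buf[p->cur ^ 1]; }
static void recon_next_frame(struct recon_pair *p) { p->cur ^= 1; }
```

With only one P-slice reference, two reconstruction buffers are always enough; supporting more references would generalize this to a ring of N buffers.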
ndufresne: right, so you allocate a third set of buffers for the reconstruction, buffers used by the encoder to decode into, and then used as reference [15:47]
paulk-leonov: exactly [15:47]
ndufresne: I bet if you had b-frames, the encoding order stays straight, so typically it's the encoded bits that are being reordered
I need to check VA-API again I suppose, to understand what we do there
and if B-Frames are supported
[15:48]
paulk-leonov: mhh, wouldn't that miss references?
I suppose you'd still need to provide the reference sources first (including the one that comes after the future B frame)
(which is probably not very good for real-time streaming and why it's not implemented)
[15:50]
ndufresne: I wrote something on a bug about this recently .. need to find it back [15:51]
paulk-leonov: my feeling is that hardware encoders are meant for encoding sources that are naturally ordered (e.g. cameras)
(in embedded systems)
[15:52]
ndufresne: depends how you define real-time, for Live Streaming, B-Frames help you stay close to your CBR defined rate, so it helps reducing waste on the TS mux [15:52]
paulk-leonov: ah right, so it's a quality vs latency tradeoff I suppose? [15:52]
ndufresne: yes, B-Frames introduce latency, because the anchor is streamed first, so each time you have a b-frame there is nothing to render; on live streams, this is 1 frame of latency
in theory, you could catch up when you present the B-Frame, but that's not what you want to display
[15:54]
paulk-leonov: right
okay gotta go, let's try and get a discussion started about this soon-ish :)
btw I found a register layout for the allwinner encoder, which gives some indications
apparently it can do B frames
[15:55]
ndufresne: ok, that's good somehow, we need information [15:57]
paulk-leonov: https://github.com/allwinner-zh/media-codec/blob/master/sunxi-cedarx/SOURCE/vencoder/include/video_enc_reg.h [15:57]
ndufresne: ah nice, the tiling isn't a mystery, they have a software detiler for some reason (assembly optimized)
ah, that's because of how Android MediaCodec works, you always need to offer some YUV 4:2:0/4:2:2 linear / cpu accessible format
[15:59]
paulk-leonov: seems full-featured, there are baseline/main/high profiles, up to level 5.1
I bet it's constrained-baseline only
supports min/max QP, fixed QP, intra refresh, crop, CAVLC and CABAC
key frame interval of course, not sure what coding mode exactly means, it's frame or field
got up to 3 ROI regions
[16:08]
paulk-leonov: I went through the code, was wondering if they were using some extra HW parser or not, and found a nalu parser here, so they probably drive it stateless
https://github.com/allwinner-zh/media-codec/blob/master/sunxi-cedarx/SOURCE/plugin/vdecoder/h264/h264_nalu.c
[16:15]
paulk-leonov: that's a good source of info indeed ! [16:26]
.... (idle for 17mn)
pinchartl: hverkuil: ping [16:43]
ndufresne: ezequielg: bbrezillon: how does the capture pixel format setup work atm ? is it currently always hardcoded by userspace or does the driver do anything ? e.g. for 10bit/8bit and for native vs ipp ?
I was thinking to use enum_fmt, consider the first formats to be native (so prefer them first), then if I got a 10bit bitdepth, prefer a 10bit format, otherwise pick an 8bit format and cross fingers
[16:55]
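The heuristic sketched here — trust the ENUM_FMT ordering to put native formats first, match the stream bit depth, fall back to 8-bit — could look like the following. This is a hypothetical helper over an already-enumerated format list, not existing driver or plugin code:

```c
#include <stdint.h>

/* Pick a capture format: formats are assumed to be listed in ENUM_FMT
 * order with native formats first; take the first entry whose depth
 * matches the stream bit depth, else fall back to the first 8-bit
 * entry and cross fingers. Returns an index into fmts, or -1. */
struct cap_fmt {
    uint32_t fourcc;
    int depth;  /* bits per component */
};

static int pick_cap_fmt(const struct cap_fmt *fmts, int n, int stream_depth)
{
    int fallback = -1;

    for (int i = 0; i < n; i++) {
        if (fmts[i].depth == stream_depth)
            return i;  /* first (most native) exact match */
        if (fallback < 0 && fmts[i].depth == 8)
            fallback = i;
    }
    return fallback;
}
```

The weak point, as the question implies, is that V4L2 does not guarantee any particular ENUM_FMT ordering, so "native first" is a convention the driver would have to uphold.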
....................... (idle for 1h54mn)
ezequielg: bbrezillon: Another question: if the sps and scaling matrix haven't changed, do I still need to set a control with that info for my current request, or can I assume the previous one will be picked ?
in ffmpeg, you seem to set everything all the time, but considering it's a double copy, that looks like a small overhead
[18:50]
...................................... (idle for 3h7mn)
***joek has left [21:57]
jkale has left [22:08]
