[01:48] <stdint> ndufresne, SVC has nothing to do with the hardware support
[02:27] <tfiga> ndufresne: SVC is a VP9 thing, isn't it? WebRTC can still use simulcast with other codecs regardless of that and without any special hardware support
[02:28] <tfiga> I believe it can actually convert separate VP9 streams into one SVC stream too
[03:02] *** cybrNaut has quit IRC (Ping timeout: 268 seconds)
[03:32] <stdint> I think H.264 also supports SVC
[03:32] <stdint> but it is really not necessary for the hardware to care about this
[06:56] *** faction has quit IRC (Read error: Connection reset by peer)
[07:15] <tfiga> stdint: You're right. WebRTC doesn't seem to support it for H.264, though
[07:16] <tfiga> Although you could still have hardware support for generating SVC streams on the fly.
[07:16] <tfiga> Not strictly necessary to use the feature, though.
[07:16] <stdint> I am not sure about the encoder part
[07:17] <stdint> but for the decoder, the simple way is to drop the SVC slice; there will be a new I slice at the next sequence anyway
[07:18] <stdint> so it won't break anything, even if there is a little visual loss
[07:18] <stdint> but for the encoder, if you are not talking about those devices with firmware
[07:18] <stdint> it is just some additional work in the stream header
[07:19] <stdint> the only problem is how to generate the pixel data for the SVC slice
[07:19] <stdint> or I should say SVC frame
[08:01] <svarbanov> tfiga, stdint, I think that ndufresne is right that very few hw encoders support SVC for VP8/9. The situation with H.264 is better IMO.
[08:03] <stdint> svarbanov, I didn't check On2's, but the fact is that most of the encoders on the market don't support VP8/VP9
[08:03] <stdint> if it weren't for Google, I am sure nobody would want VP8 or VP9
[08:04] <stdint> On2 never achieved its promise
[08:05] <svarbanov> stdint, everyone wants AV1 ;) I'm not sure about the statement that royalty-free codecs are always better
[08:06] <stdint> not sure whether it will become the next VP series
[08:06] <stdint> ayaka may have some loyalty in this matter, but I do not
[08:07] <stdint> there is a landmark here: I wonder whether the Japanese broadcast system will adopt the AV1 standard
[08:09] <stdint> as I am sure the American film association will only use ITU codecs
[08:59] *** aballier has quit IRC (Ping timeout: 245 seconds)
[09:54] *** Muzer has quit IRC (Ping timeout: 264 seconds)
[10:08] <hverkuil> tfiga: it looks like we are almost done with the encoder spec. The only outstanding issue is whether setting the OUTPUT format should change the CAPTURE format.
[10:10] <hverkuil> Note that the patch that allows userspace to choose the sizeimage value in S_FMT never made it to mainline. Please reply to my last reply to svarbanov.
[10:11] <hverkuil> svarbanov: could you look at my reply as well? I think we need this for the codec API.
[10:11] <svarbanov> hverkuil, I'll look, thanks for the reminder
[10:46] <sailus> jmondi: Heippa!
[10:50] *** _jmleo has quit IRC (Ping timeout: 245 seconds)
[11:48] *** nsaenz has quit IRC (Read error: Connection reset by peer)
[12:03] *** indy has quit IRC (Remote host closed the connection)
[12:23] <jmondi> sailus: Hey Sakari!
[12:32] <jmondi> sailus: I was about to send v5 of the v4l-multiplexed series, but I wanted to include the CSI-2 data lane negotiation on top... what do you think, should I send v5 soon and the additional 5 patches (from: "[RFC 0/5] media: Implement negotiation of CSI-2 data lanes") separately?
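Returning to stdint's point at [07:17] about simply dropping the SVC slices in a decoder that cannot handle them: below is a minimal sketch, not from the discussion itself, of what that filtering could look like for an H.264 Annex B stream. It assumes the SVC enhancement data sits in NAL unit types 14 (prefix NAL unit), 15 (subset SPS) and 20 (coded slice extension), which is what H.264 Annex G specifies.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* SVC enhancement data lives in NAL unit types 14, 15 and 20. */
    static int nal_is_svc(uint8_t nal_header)
    {
        uint8_t type = nal_header & 0x1f;

        return type == 14 || type == 15 || type == 20;
    }

    /*
     * Copy only the non-SVC NAL units of an Annex B buffer from src to
     * dst (which must be at least len bytes). Returns the number of
     * bytes written; a non-SVC decoder then sees just the base layer.
     */
    size_t strip_svc_nals(const uint8_t *src, size_t len, uint8_t *dst)
    {
        size_t i = 0, out = 0;

        while (i + 3 <= len) {
            /* Look for a 00 00 01 start code. */
            if (src[i] != 0 || src[i + 1] != 0 || src[i + 2] != 1) {
                i++;
                continue;
            }

            size_t nal = i + 3; /* first byte of the NAL unit (its header) */
            size_t end = nal;

            /* The NAL unit runs until the next start code or end of data. */
            while (end + 3 <= len &&
                   (src[end] != 0 || src[end + 1] != 0 || src[end + 2] != 1))
                end++;
            if (end + 3 > len)
                end = len;

            if (nal < len && !nal_is_svc(src[nal])) {
                memcpy(dst + out, src + i, end - i);
                out += end - i;
            }
            i = end;
        }
        return out;
    }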
[12:41] <neg> hverkuil: ping
[12:50] <hverkuil> neg: pong
[12:52] <neg> hverkuil: There are a few rcar-csi2 patches delegated to you in patchwork ready to be consumed. I was wondering if you were waiting for me to collect them and send a PR, or if you prefer to handle that yourself
[12:53] <hverkuil> I'm handling patches today, so I'll try to include yours.
[12:53] <neg> thanks
[12:53] <hverkuil> I see 6 patches from you delegated to me, correct?
[12:55] <neg> Yes, and all but '[v3] rcar-csi2: Propagate the FLD signal for NTSC and PAL' should be ready for consumption
[12:55] <neg> thanks for including this, and let me know if I can help in any way
[12:59] <hverkuil> koike: ping
[13:02] <hverkuil> mripard: ping
[13:04] <mripard> hverkuil: pong
[13:05] <hverkuil> mripard: regarding your v8 h264 series: I saw some comments about v7 that came in after you posted v8, but it is not clear whether a v9 is needed based on those comments.
[13:05] <hverkuil> If not, then I wanted to make a pull request for it.
[13:06] <mripard> hverkuil: as far as I understood, the comments on v7 were more about future improvements to the API in general
[13:06] <mripard> (so both MPEG2 and H264)
[13:07] <mripard> and that discussion wasn't really settled
[13:07] <mripard> (meaning that tfiga was arguing for that change, and ndufresne against)
[13:08] <hverkuil> OK, I'm going to merge v8.
[13:10] <mripard> awesome
[13:10] <mripard> thanks!
[13:15] <hverkuil> mripard: Why is it V4L2_PIX_FMT_H264_SLICE_RAW instead of V4L2_PIX_FMT_H264_SLICE (as per V4L2_PIX_FMT_MPEG2_SLICE)?
[13:17] <mripard> hverkuil: iirc that was asked by ezequielg and ndufresne to differentiate between slices with and without start codes
[13:17] <mripard> https://lkml.org/lkml/2019/2/11/1785 for the discussion
[13:18] <hverkuil> It doesn't look like the documentation for the format mentions that it is without start codes.
[13:18] <hverkuil> I think that should be added.
[13:19] <mripard> I can send a patch for that if you want
[13:19] <hverkuil> Please do.
[13:19] <hverkuil> I think it is important that this is documented.
[13:23] <paulk-leonov> whoops, looks like I totally missed that SLICE_RAW thing
[13:23] <paulk-leonov> I don't think it makes sense to have that without the matching MPEG-2 counterpart
[13:23] <paulk-leonov> hverkuil, can we change the definitions later, like for the controls? Or maybe move these pixfmts to the non-public API
[13:24] <paulk-leonov> it looks like it'll need some rework, but better to do that after merging h264
[13:24] <hverkuil> Then you need to pull it out of the uapi.
[13:24] <paulk-leonov> can we do that (and probably for mpeg-2 too)?
[13:25] <hverkuil> too late for mpeg2, it's already in the uapi.
[13:25] <sailus> jmondi: I think it'd be easier to keep the set from growing.
[13:25] <hverkuil> for h264 (not merged yet) it's not a problem.
[13:25] <sailus> I'd like to discuss the semantics with you, but I just need to leave the office.
[13:25] <sailus> Let's see if we could discuss it later, perhaps tomorrow morning?
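As an aside on the start-code distinction mripard mentions at [13:17]: the two buffer layouts under discussion differ only in the Annex B prefix. The byte values below are invented for illustration; 0x65 is a valid NAL header for an IDR slice.

    #include <stdint.h>

    /* A coded slice with an Annex B start code in front (the SLICE layout): */
    static const uint8_t slice_annexb[] = {
        0x00, 0x00, 0x00, 0x01, /* start code */
        0x65,                   /* NAL header: nal_ref_idc=3, nal_unit_type=5 (IDR) */
        0x88, 0x84,             /* slice header + data, truncated here */
    };

    /* The same slice without the start code (the SLICE_RAW layout): */
    static const uint8_t slice_raw[] = {
        0x65,                   /* buffer begins directly at the NAL header */
        0x88, 0x84,
    };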
[13:26] <paulk-leonov> I'm a bit surprised the decision was to make SLICE and SLICE_RAW when we could just always require the start code and indicate where the raw data starts with an offset
[13:26] <paulk-leonov> which is IMO what we should do for mpeg-2 too
[13:28] <jmondi> sailus: tomorrow morning I might be available early-ish, and then busy from 10am CEST on for a few hours
[13:30] <paulk-leonov> sorry I am reacting to this just now -- I had it at the back of my mind that mpeg/h264/hevc will need to be modified later anyway, so we might as well merge it with design flaws
[13:30] <paulk-leonov> but since the pixfmt is in the public API, we have to be very careful about that part now
[13:31] <paulk-leonov> ... or move it to the non-public API and do the right thing later
[13:31] <hverkuil> mripard: found a few remaining issues, but there is clearly some discussion on the pixfmt name as well, so I'll wait for a v9.
[13:32] <hverkuil> BTW, the Makefile patch in your series didn't apply cleanly, so you'll want to rebase anyway.
[13:32] <paulk-leonov> hverkuil, I think it would be preferable not to wait for the pixfmt discussion
[13:32] <hverkuil> Darn, I had hoped to merge it today :-(
[13:32] <hverkuil> paulk-leonov: it's OK to move it to a private header.
[13:33] <hverkuil> h264-ctrls.h
[13:33] <paulk-leonov> great ;)
[13:33] <paulk-leonov> :)
[13:33] <mripard> will this still be 5.2 material?
[13:33] <hverkuil> mripard: sure
[13:34] <hverkuil> I'm not sure whether the pixfmt should be documented, esp. if it can change. Please put the pixfmt documentation in a patch of its own; that way it is easy to skip it.
[13:35] <hverkuil> It should definitely have a note similar to the one for the codec controls, saying that it can change in the future.
[13:39] <koike> hverkuil: pong
[13:40] <hverkuil> koike: did you review the vimc patch series from André Almeida? I would prefer to have your Ack or Reviewed-by.
[13:41] <hverkuil> pH5: ping
[13:47] <neg> koike: hi, do you have a few minutes to talk about vimc?
[13:49] <koike> hverkuil: not yet, I'm planning to review it this week
[13:49] <koike> neg: yes (I just need 20 min)
[13:51] <hverkuil> koike: OK, then I'll skip reviewing that series until you give the OK.
[13:51] <neg> koike: thanks, let me know when you are ready :-)
[14:00] <hverkuil> neg: one patch didn't apply. Can you rebase and repost? I mailed you the details.
[14:02] <hverkuil> pH5: I merged most of your 10-part coda series, except patches 7 and 10. The first because I think you should just remove ENUM_FRAMEINTERVALS support, the last because I think we need v4l2-mem2mem helper functions for the try_en/decode_cmd ioctls.
[14:03] <hverkuil> I'm waiting for the next version of the stateful en/decoder specs before I implement those helpers.
[14:04] <koike> neg: I'm ready
[14:05] <neg> hverkuil: sorry about that, yes, I now see it indeed depends on the patch which is not ready; I will reorder them and resend. Thanks for pointing this out
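On paulk-leonov's alternative at [13:26] (always require the start code and pass an offset to the raw data): part of the appeal is that the offset is trivial to derive. A sketch under that assumption; nothing here is existing V4L2 API.

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Return the offset of the NAL header behind an Annex B start code,
     * or 0 if the buffer already starts with raw slice data.
     */
    static size_t h264_payload_offset(const uint8_t *buf, size_t len)
    {
        if (len >= 4 && buf[0] == 0 && buf[1] == 0 && buf[2] == 0 && buf[3] == 1)
            return 4; /* 00 00 00 01 */
        if (len >= 3 && buf[0] == 0 && buf[1] == 0 && buf[2] == 1)
            return 3; /* 00 00 01 */
        return 0;
    }

A driver that wants the raw bytes would then read from buf + offset, so a single pixfmt could cover both layouts instead of the SLICE/SLICE_RAW split.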
[14:07] <neg> koike: I'm interested in running both the "Raw Capture 0" and "RGB/YUV Capture" devices at the same time, with both of them connected to "Sensor A" in the graph
[14:08] <koike> neg: right, this is not supported in vimc atm, but I don't think it is hard to implement
[14:09] <neg> I had a go at it and failed :-( So I was wondering if this was something that was planned to be supported, or if I misunderstood and it should not be possible
[14:09] <neg> Then I know it's something that could be supported if someone does the work to make it happen, thanks ;-)
[14:14] <koike> neg: I had this in mind when I started the code actually, but we dropped it because the API didn't support it (iirc, the whole graph is marked as busy by the media core), and it was easier to propagate configurations from the capture to the sensor if there was only a single possible path.
[14:16] <koike> we need to reformulate how this propagation works
[14:17] <neg> koike: I think I solved the media graph issue; my problem was with vimc-streamer and ved_pipeline in struct vimc_stream
[14:18] <neg> When I stopped one vdev it blew up, and I did not have time to look into it more at the time. Now that I know it's something you already thought of and want to support, I might have another go at it
[14:19] <koike> neg: if you are using both capture nodes at the same time, they can be outputting different v4l2 pixel formats, right? There is a problem, because the sensor generates its output in the format the capture expects. So I guess we could have two vimc_stream objects, one per capture, and the sensor would generate two images, one for each stream
[14:23] <neg> koike: oh, that is an idea, I never thought about that
[14:23] <koike> and having a vimc_stream per capture is nice actually, because when we have the image generation optimized (generated directly in the capture instead of being processed in each node), the tpg can just generate images from the vimc_stream struct
[14:23] <koike> neg: the only thing is that the image going to both capture nodes won't be exactly the same, but is that a problem?
[14:24] <neg> koike: I don't know, for my intended usage it's not a problem, but there might be others
[14:25] <pH5> hverkuil: thank you, I'll try removing ENUM_FRAMEINTERVALS. agreed about the try_en/decode_cmd helpers.
[14:25] <koike> neg: well, it is easy to make the same image if they have the same pixel format. In any case we could try it like this first and add a TODO for later
[14:29] <neg> koike: sounds good, thanks for your help
[14:29] <neg> hverkuil: rebased patch sent
[14:32] <koike> neg: np, let me know if I can help with anything else
[14:56] <hverkuil> koike: what do you mean with "the only thing is that the image going to both capture nodes won't be exactly the same"?
[14:56] <hverkuil> in what respect do they differ?
[14:59] <koike> hverkuil: if you call the tpg twice, it generates different images, no? (iirc there was some noise in the first line on purpose, is that random?)
[15:03] <koike> hverkuil: so if we call tpg fill buffer twice, once for each capture at a given time, the same image won't reach both captures (unless this is the expected behaviour)
[15:05] <hverkuil> If the test pattern is moving, then you'll get different images. But if you have two captures going on that come from a single source, then you can do that manually. We might need an extra function in v4l2-tpg.h for that. Basically the mv_hor/vert_count fields in the tpg need to be provided externally (globals in the vimc state).
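A sketch of hverkuil's suggestion at [15:05], assuming a tpg embedded in each vimc_stream per koike's two-stream idea: vimc keeps one device-wide pair of movement counters and loads them into each stream's tpg before filling, so both captures render the same frame of the pattern. tpg_fill_plane_buffer() and the mv_hor_count/mv_vert_count fields exist in include/media/tpg/v4l2-tpg.h; the vimc_device members and helper names here are invented for illustration.

    /* Hypothetical vimc helpers; only the tpg names are real. */
    static void vimc_fill_frame(struct vimc_device *vimc,
                                struct vimc_stream *stream, u8 *vbuf)
    {
        /* Use the device-wide counters instead of the per-tpg ones. */
        stream->tpg.mv_hor_count = vimc->mv_hor_count;
        stream->tpg.mv_vert_count = vimc->mv_vert_count;

        tpg_fill_plane_buffer(&stream->tpg, V4L2_STD_UNKNOWN, 0, vbuf);
    }

    /* Called once per frame period, not once per capture node. */
    static void vimc_advance_pattern(struct vimc_device *vimc)
    {
        vimc->mv_hor_count += vimc->mv_hor_step;
        vimc->mv_vert_count += vimc->mv_vert_step;
    }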
[15:07] <hverkuil> Hmm, I think the movement is in pixels, so the test patterns will move at different speeds depending on the scaling. Some more work would be needed in the tpg to make the speed relative instead of in absolute pixels.
[15:14] *** benjiG has left
[18:04] *** _abbenormal has quit IRC (Quit: Leaving)
[18:33] *** ccr has quit IRC (Remote host closed the connection)
[23:22] *** _bingbu_ has quit IRC (Ping timeout: 244 seconds)
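On hverkuil's closing point at [15:07] about the movement being in absolute pixels: two captures of different widths drift apart because the same per-frame step covers a different fraction of each image. Scaling the step by the frame width keeps them in sync. The function below is illustrative only, not tpg API, and the constants are assumptions.

    /* E.g. 4 px/frame at 640 wide becomes 8 px/frame at 1280 wide. */
    static unsigned int mv_step_for_width(unsigned int width)
    {
        const unsigned int ref_width = 640; /* width the base step is tuned for */
        const unsigned int base_step = 4;   /* pixels per frame at ref_width */

        return base_step * width / ref_width;
    }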