[09:02] <phk> hi, I'm trying to get a constellation pic using updatelee's kernel for a TBS6903X device (DVB-S2X, stid135) but I'm not able to get clear clusters of dots
[09:03] <phk> I'm getting all the clusters in the corners of the graph
[09:03] <phk> and I'm getting a QPSK graph even though it's 8-PSK
[09:03] <phk> https://gitlab.com/updatelee/v4l-updatelee/-/blob/master/drivers/media/dvb-frontends/stid135/stid135_drv.c#L10901
[09:04] <phk> I'm trying to get I and Q samples using these calls after line 10901 in the file linked above
[09:04] <phk> ChipGetField(pParams->handle_demod, FLD_FC8CODEW_DVBSX_DEMOD_ISYMB_I_SYMBOL(demod), &i_sample);
[09:05] <phk> ChipGetField(pParams->handle_demod, FLD_FC8CODEW_DVBSX_DEMOD_QSYMB_Q_SYMBOL(demod), &q_sample);
[09:05] <phk> can anyone please help me? do I have to add anything else or change anything...
[09:45] <kalpu> Hi, I'd like to set the FPS a sensor can send out onto the MIPI link. Is there already a ctrl to set this? Or what's the best way?
[09:48] <kalpu> I guess it's "s_frame_interval" ...
[09:49] <jmondi> kalpu: it depends on the driver implementation
[09:49] <jmondi> some drivers support frame interval configuration through VIDIOC_SUBDEV_S_FRAME_INTERVAL
[09:50] <jmondi> some others allow you to change the H/V blank timings, so that you control the frame period through those
[09:50] <jmondi> what driver are you dealing with?
[09:54] <kalpu> I'm trying to write one :) ...
[09:55] <jmondi> kalpu: even better :)
[09:55] <kalpu> My driver is working in the sense that I can get a stream, ... but it's a bit rough and I guess I need to brush it up... One thing is: I want to add correct settings for all the registers (i2c subdev)
[09:55] <kalpu> So what should I implement so that the FPS can be set?
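The symptoms phk describes (all clusters piled into the corners, and 8-PSK collapsing into a QPSK-looking picture) are consistent with plotting the raw register values as unsigned. A minimal sketch, assuming the ISYMB/QSYMB symbol fields are 8-bit two's-complement values (the field width is an assumption to check against the STiD135 datasheet; the helper name is hypothetical):

```c
#include <stdint.h>

/* Hypothetical conversion helper for constellation plotting.
 * If the demodulator returns signed 8-bit I/Q symbols but they are
 * plotted as unsigned 0..255, every cluster lands in a corner of the
 * graph; sign-extending recenters the constellation around 0. */
static int8_t symb_to_signed(uint32_t raw_field)
{
        return (int8_t)(raw_field & 0xff); /* reinterpret bit 7 as the sign */
}
```

With this, a raw value like 0xF3 maps to -13 instead of 243, so the clusters spread around the origin instead of bunching against the axes.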
[09:56] <jmondi> SUBDEV_S_FRAME_INTERVAL provides a simpler but a bit naive implementation
[09:57] <jmondi> you give it a fixed number of FPS and the driver sets it; this means the driver shall keep a list of supported FPS ranges and apply the closest possible one
[09:57] <kalpu> I do want to set the FPS explicitly because I believe (please correct me if I'm wrong) that the link "Sensor->FPGA-to-MIPI->SoC" needs to get an FPS limit, because the SoC can only handle 15 FPS at 4k or so
[09:57] <kalpu> jmondi: cool, I'll try that, I like naive :)
[09:58] <jmondi> kalpu: eheh, but if you're looking to upstream the driver, and you have documentation that allows you to do so, letting user space control the VBLANK period is really the way to go :)
[09:58] <kalpu> yeah, in the end mainline would be better...
[09:59] <jmondi> kalpu: does your sensor driver expose the subdev API to userspace, or does it work with a bridge driver that only exposes a single devnode?
[09:59] <kalpu> by the way, ... what is VBLANK? I've heard it a lot lately ... sorry, but I currently only know "integration time", "Frame Length" and a bit more...
[10:00] <kalpu> jmondi: NXP has the "main driver" for their MIPI stuff, but I enhanced it so that my driver gets a /dev/subdev-something node I can connect to with v4l2-ctl (or whatever it is called)
[10:01] <jmondi> VBLANK is the vertical blanking period between two frames. You shorten or enlarge it to control the frame period. The larger the blanking times (you could also control HBLANK, not only VBLANK) the lower the frame rate
[10:01] <kalpu> jmondi: So it's a subdev driver that can expose stuff.
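jmondi's VBLANK explanation can be made concrete: the frame period is the total frame size including blanking divided by the pixel rate. A small sketch (function and parameter names are illustrative, not from any real driver):

```c
#include <stdint.h>

/* Frame rate from geometry and blanking: enlarging VBLANK (or HBLANK)
 * grows the total frame size and therefore lowers the frame rate. */
static uint32_t fps_from_blanking(uint64_t pixel_rate_hz,
                                  uint32_t width, uint32_t hblank,
                                  uint32_t height, uint32_t vblank)
{
        uint64_t frame_pixels = (uint64_t)(width + hblank) * (height + vblank);
        return (uint32_t)(pixel_rate_hz / frame_pixels);
}
```

With standard 1080p timings (148.5 MHz pixel rate, 2200x1125 total frame) this gives 60 fps; stretching VBLANK so the total vertical size doubles halves it to 30 fps.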
[10:02] <jmondi> kalpu: I see, so be aware that if your bridge driver does not do so, you have to set the frame rate manually from user space
[10:02] <jmondi> on the sensor v4l-subdev node
[10:02] <kalpu> aahh, so in terms of the sensor datasheet this is either "Frame Length" (total) or "Integration offset"
[10:03] <kalpu> Thanks, I'll try to dig a bit :)
[10:03] <jmondi> naming usually depends on the manufacturer :) but yes, seems like it
[10:03] <jmondi> have fun!
[10:41] <kalpu> mhm, the bridge driver calls "v4l2_subdev_call(sensor_sd, pad, enum_frame_interval, NULL, fie);" on me, but in v4l2-subdev.c I find "v4l2_subdev_call(sd, video, s_frame_interval, arg);". So what's the right way? Should I implement "enum_frame_interval" or "s_frame_interval"? Or both?
[10:48] <jmondi> kalpu: they have two different purposes, and if your bridge driver wants both, you have to implement both
[10:48] <jmondi> https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-subdev-enum-frame-interval.html
[10:48] <jmondi> https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-subdev-g-frame-interval.html
[11:43] <gnurou> is it supported to perform polls from different threads on an m2m device? I am doing a little experiment using the vicodec encoder where one thread polls for OUTPUT buffers (EPOLLOUT) and the other for CAPTURE buffers (EPOLLIN), but after the LAST buffer any poll() on the OUTPUT queue never returns, even if buffers are still queued. Looking at v4l2_m2m_poll_for_data() I understand why this is the case, but is this a bug, or just
[11:43] <gnurou> the desired behavior?
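Going back to the "naive" S_FRAME_INTERVAL policy jmondi described earlier, the selection logic could look something like this (the table of rates is made up for illustration; a real sensor driver would derive it from its mode list):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the naive S_FRAME_INTERVAL policy: keep a list of frame
 * rates the sensor supports and apply the one closest to what user
 * space requested. */
static const uint32_t supported_fps[] = { 10, 15, 30, 60 };

static uint32_t closest_fps(uint32_t requested)
{
        uint32_t best = supported_fps[0];
        for (size_t i = 1; i < sizeof(supported_fps) / sizeof(supported_fps[0]); i++) {
                uint32_t cand = supported_fps[i];
                uint32_t dc = cand > requested ? cand - requested : requested - cand;
                uint32_t db = best > requested ? best - requested : requested - best;
                if (dc < db)
                        best = cand;
        }
        return best;
}
```

In a driver, s_frame_interval would then program the register settings (or blanking) for the chosen rate and write the actually applied interval back into the passed structure.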
[13:48] <tfiga> hverkuil: I think I'll send one more version of the vb2 MAINTAINERS update series
[13:49] <tfiga> I learned that there is a CREDITS file, so I'll add some credits there :)
[14:00] <ndufresne> gnurou: that's interesting, I do for sure (for stateful decoders/encoders) poll the OUTPUT and the CAPTURE queue from different threads in gstreamer, I never noticed that issue, but it feels like it can have a performance impact (probably without breaking things)
[14:01] <gnurou> ndufresne: positive or negative performance impact? :)
[14:02] <ndufresne> gnurou: that being said, the draining flow in gstreamer is a bit more strict than what the kernel is supposed to offer, so when I need to "drain", I usually wait for that drain to complete from the OUTPUT queue threads, so I guess I actually never try to queue output buffers while draining, which clearly will have a negative impact on starting the
[14:02] <ndufresne> next segment if there is something else after
[14:04] *** sailus has quit IRC (Ping timeout: 244 seconds)
[14:04] <ndufresne> So basically I do: QUEUE(OUT), CMD_STOP(), wait for LAST/EPIPE signalled from the capture threads, release (streamoff/streamon), and then resume
[14:04] <ndufresne> is this what you are trying to avoid?
[14:05] <gnurou> ah, so you never dequeue the remaining OUTPUT buffers after issuing the STOP command?
[14:05] <ndufresne> not until it's drained and reset
[14:05] <ndufresne> but if you do a resolution change, it means a slightly larger delay before you can start decoding again
[14:06] <ndufresne> I suppose
[14:06] <gnurou> right, meaning they are released by the STREAMOFF and not dequeued
[14:06] <ndufresne> indeed, you are right, I use streamoff() to dequeue them all
[14:07] <ndufresne> before I continue with another stream
[14:07] <gnurou> that explains why you are not getting what I see. I have been following the encoder specification to the letter, which states that after the STOP command the encoder must continue "...
dequeuing processed OUTPUT buffers, until all the buffers queued before the V4L2_ENC_CMD_STOP command are dequeued"
[14:09] <ndufresne> it should do that already though, since it must encode all remaining buffers
[14:09] <ndufresne> that worked for me with coda and venus, the number of queued OUTPUT buffers and the resulting number of AUs matched
[14:10] <gnurou> and if I happen to dequeue the last CAPTURE buffer before polling for the next OUTPUT buffer, that next buffer is never signaled because v4l2_m2m_poll_for_data() returns early (and does not consider whether we asked for POLLIN or POLLOUT - it just considers the stream to be done)
[14:10] <ndufresne> but it's correct that I don't try to issue DQ ioctls between CMD_STOP and LAST, since I have no use for that, and I then short-circuit this with a streamoff
[14:10] <ndufresne> it's a very corner case, but I think this should be fixed
[14:11] <gnurou> now I don't see how dequeuing OUTPUT buffers could be a necessity for draining; it can be useful if you want to start queueing the next stream, but it doesn't sound like a requirement
[14:11] <ndufresne> what it could allow is to start filling the buffers during the draining process, and that seems like a huge time saver for some situations
[14:11] <gnurou> so maybe I am trying to follow the spec too literally :)
[14:11] <ndufresne> which encoding use case do you have btw?
[14:12] <ndufresne> since draining can be used for many reasons
[14:12] <gnurou> just toying around with vicodec, nothing serious
[14:12] <gnurou> in this case this is the end of stream
[14:12] <gnurou> so nothing comes after
[14:13] <ndufresne> in the case of end-of-stream, I don't think it's very useful to dequeue the output buffers while draining, as you won't need any of these buffers for anything else
[14:13] <gnurou> in any case, you seem to agree that the behavior I have seen is a bug?
If so I can send a patch, I think I understand what is going wrong
[14:13] <gnurou> you're right, it's probably not - I have just been following the spec blindly :P
[14:13] <ndufresne> for a resolution change, if the resolution is going lower, you may want to reuse the larger buffers, so in that specific case, you could spare a few lost sensor frames by passing them back to the capture HW before draining ended
[14:14] <ndufresne> but yes, I think the behaviour is a bug
[14:15] <ndufresne> draining should not change the fact that an OUTPUT buffer being done results in poll(output) returning, I don't see any logic for doing it differently
[14:15] <gnurou> yup, in that case it does make sense. I suspect the bug could go unnoticed for resolution changes since after the CAPTURE queue restarts, the OUTPUT poll will likely signal again since `last_buffer_dequeued` won't be true anymore on the CAPTURE queue (that's what is blocking things in my case)
[14:16] <ndufresne> yes, I think it would introduce a certain gap of time where buffers aren't being dequeued when they are done, but it can go unnoticed, as they will eventually be released
[14:16] <gnurou> thanks for the insightful comments, I will both prepare a patch to fix this, and stop dequeuing OUTPUT buffers when a STREAMOFF is enough :)
[14:17] <ndufresne> streamoff is basically a shortcut for EOS; thanks for spotting this, might spare someone a lot of time when we get to do seamless resolution changes (hope to get to that level of quality some day)
[14:39] <tfiga> gnurou: I think that's a bug in the specification
[14:40] <tfiga> probably a copy-paste from the decoder version that we forgot to edit out
[14:40] <gnurou> tfiga: I'll see if I can rephrase this part too then. Thanks for confirming
[14:40] <tfiga> at least this is suggested by the other point: "dequeuing the V4L2_EVENT_EOS event, if the client subscribes to it."
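The early-return behaviour gnurou describes in v4l2_m2m_poll_for_data() can be modelled with a toy decision function (this is an illustration of the reported logic and the proposed fix direction, not the actual kernel code):

```c
#include <stdbool.h>

/* Toy model of the reported bug: once the buffer carrying
 * V4L2_BUF_FLAG_LAST has been dequeued from the CAPTURE queue, the
 * poll helper treats the whole stream as finished and returns early,
 * so a thread polling the OUTPUT queue for EPOLLOUT is never woken
 * even though processed OUTPUT buffers are ready for DQBUF. */
static bool pollout_current(bool output_done_pending, bool capture_last_dequeued)
{
        if (capture_last_dequeued)
                return false; /* early return: stream considered done */
        return output_done_pending;
}

static bool pollout_proposed(bool output_done_pending, bool capture_last_dequeued)
{
        (void)capture_last_dequeued;
        /* OUTPUT readiness should not depend on the CAPTURE drain state */
        return output_done_pending;
}
```

After the LAST buffer, pollout_current() starves the OUTPUT-polling thread while pollout_proposed() still signals the pending buffers, matching what both participants agree the fix should do.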
[14:40] <tfiga> but we don't have such an event defined for the encoder
[14:43] <tfiga> no, sorry, it's a legacy thing
[14:43] <tfiga> s5p-mfc signals it and some userspace expected this
[14:43] <tfiga> so the event needs to be there
[14:43] <tfiga> but I don't recall any reason to continue dequeuing the OUTPUT buffers
[14:46] <tfiga> hverkuil: do you happen to recall something?
[15:16] *** ric96 has quit IRC (*.net *.split)
[15:16] *** ukembedded has quit IRC (*.net *.split)
[15:16] *** pH5 has quit IRC (*.net *.split)
[15:38] <ndufresne> tfiga: do you see a reason why we'd have to hold off on OUTPUT done buffers being dequeued though?
[15:39] <tfiga> nope
[15:39] <ndufresne> as I said, perhaps you will reuse these buffers after the drain is done, hence you'd be able to re-fill them sooner if it did signal the done buffers
[15:39] <tfiga> the OUTPUT queue operates normally even in the drain sequence
[15:40] <ndufresne> tfiga: use case in mind: 1080p buffers, and a camera sensor transitioning from 1080p->720p without re-allocation, and importing encoder buffers
[15:40] <tfiga> that needs a resolution change sequence defined, but yeah, that should normally be supported
[15:40] <ndufresne> unless I misread, gnurou was saying that between CMD_STOP and LAST, there is no signal sent to unblock OUTPUT
[15:41] <tfiga> that sounds like a bug in the m2m framework
[15:41] <tfiga> well
[15:41] <ndufresne> it won't break applications, but it introduces a time gap, which is not very wanted for this corner case I guess
[15:41] <tfiga> but actually, that might be a feature...
[15:41] <tfiga> because, as you may remember, the m2m framework primarily targets m2m devices with a 1:1 buffer relationship
[15:42] <tfiga> codec support was added to it somewhat on the side
[15:42] <ndufresne> yeah, the unfortunate design
[15:42] <ndufresne> strangely, codecs have been a much more important use case than any of the other m2m uses so far
[15:44] <tfiga> I really think we need a codec framework
[15:44] <tfiga> migrating the codec drivers using m2m to it would allow making m2m simple again
[15:45] <tfiga> well, yes and no :)
[15:46] <tfiga> also I think m2m might be good enough for stateless codecs
[15:47] <tfiga> but maybe they could benefit from a stateless codec framework as well
[15:58] <ndufresne> would you still make that a wrapper of vb2, or try to fix vb2 by using something else while at it?