#media-maint 2018-12-06,Thu

hverkuil: hi
coffee first...
[11:58]
meeting anyone?
or am I the only one who has set up a proper reminder for this meeting?
[12:03]
syoung: hi
I have a reminder, but I have to fit this in during work time, which can be difficult
[12:05]
pinchartl: hverkuil: I'm ready
well, having lunch at the same time :-)
[12:09]
mchehab: hi all
sorry for the delay... was finishing another meeting
[12:11]
hverkuil: It would be really nice if someone could review this series adding MC property support: https://www.mail-archive.com/linux-media@vger.kernel.org/msg136606.html
Still haven't received any feedback.
[12:11]
mchehab: I'll put it on my todo list
from my side, I'd like to reduce the number of pending patches to close to zero so that we can test patchwork version 2
with ~600 patches, I don't think it is wise to try any migration
yesterday I sent the list of pending stuff we currently have via email
There are patches pending review since 2013!
[12:11]
hverkuil: You sent that list to me? I don't think I received a mail from you. [12:17]
mchehab: I sent it to the sub-maintainers ML
Date: Wed, 5 Dec 2018 18:03:58 -0200
Subject: Status of the patches under review at LMML (570 patches)
[12:17]
hverkuil: I don't see it there either.
Is that just me?
[12:18]
syoung: I didn't receive that either [12:18]
mchehab: it was held at mailman :-(
message too big
[12:20]
hverkuil: Note that I have some old patches in status TODO: those may need to be resurrected at some point in time, but are not 'active'. [12:20]
mchehab: it was set to defer messages bigger than 20 kB
I increased the limit to 600 kB
please check if it arrived
[12:21]
syoung: I got it. [12:22]
hverkuil: got it [12:22]
mchehab: ok good [12:22]
hverkuil: Is there still an auto-delegation rule for Hans de Goede?
I see one for him, that can't be right.
[12:22]
syoung: in my queue there are some patches that I need to rework or that are going through another tree, so I keep them in patchwork so I can track my work.
should I not do that?
[12:23]
hverkuil: I do the same. It helps me remember that those patches are in progress and if I don't see new patches for a long time I can prod the author.
That said, I can probably clean up some of the older ones.
[12:23]
mchehab: let me check
I guess we removed those
[12:25]
hverkuil: Weird. HdG acked that patch, so perhaps he delegated the patch? [12:26]
mchehab: no rule for HdG [12:26]
hverkuil: I don't know, I'll delegate it to myself and apply it. [12:26]
mchehab: ok
my plan for today and tomorrow is to reduce the main queue, which probably means delegating patches to you
(I guess the ones that are still there undelegated are because they're not trivial and would very likely require more than one person to review)
I'll then work on the ones on my own queue
[12:27]
hverkuil: There shouldn't be too many, I regularly check the undelegated patches for those that are for me. I think I'm fairly up to date there. [12:29]
mchehab: well, I applied a lot of stuff yesterday
(basically, patches from Oct-Nov)
I'll probably keep those two on my queue:
Jan 14, 2015: [media] dvb-core: Fix frontend thread serialization http://patchwork.linuxtv.org/patch/27914 "bk0121.shin" <bk0121.shin@samsung.com>
Mar 25, 2015: [RESEND] media: dmxdev: fix possible race condition http://patchwork.linuxtv.org/patch/28968 Jaedon Shin <jaedon.shin@gmail.com>
I really hate patches touching the dvb_frontend kthread... they tend to cause very bad side effects
those usually require a lot of testing
the other patches on my queue are patches that I wrote
I'll try to flush them towards the end of this year
[12:30]
hverkuil: Is there time this week for another pull request? If I am cleaning up my patchwork queue, then I can make a new PR at the same time. I have seen some easy patches for a new PR.
brb
[12:34]
mchehab: (they're probably patches that are waiting for someone to review)
hverkuil: yes
anything else?
[12:34]
syoung: I'm in two minds about https://patchwork.linuxtv.org/patch/53287/. Now that we have BPF IR decoding, do we want to add more IR decoders to the kernel? We already have a BPF rc-mm decoder.
Any opinions on this?
[12:37]
mchehab: I don't have any problems with it
the way I see it is that, on some devices, IR is the only way to control them (like set-top boxes and TV sets)
[12:39]
hverkuil: back [12:41]
mchehab: having the IR decoder in the kernel helps to initialize the support early
without any special initrd
[12:41]
hverkuil: I would also like a review of the series adding tag support for stateless codecs: https://www.mail-archive.com/linux-media@vger.kernel.org/msg136969.html [12:42]
syoung: ok. It also looks like a well-tested decoder [12:42]
hverkuil: (sorry for the wrong Subject title, that was a copy and paste error) [12:42]
mchehab: not sure if the same could be done with BPF, but I suspect that the BPF code would be loaded too late...
if one would need to start a different runlevel, for example
[12:42]
syoung: well, if /etc/rc_maps.cfg is set up correctly, the BPF decoder would be loaded when udev starts and the hardware uevent arrives
in fact there is no kernel keymap which uses rc-mm, so in order to use rc-mm this would have to go through /etc/rc_maps.cfg and udev, just like with BPF
[12:44]
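For reference, the mapping syoung is describing lives in /etc/rc_maps.cfg: ir-keytable, run from a udev rule when the rc device's uevent arrives, consults it to decide which keymap file to load. A minimal sketch, assuming the usual three-column driver/table/file layout shipped with v4l-utils; the keymap file name is made up for illustration:

    # driver   table   file
    # '*' is a wildcard; the file is typically looked up under
    # /etc/rc_keymaps or /lib/udev/rc_keymaps unless an absolute
    # path is given.
    *          *       my_rcmm_remote.toml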
mchehab: syoung: for it to be able to alter the runlevel, the BPF code would have to be part of the initramfs
so, the question is: does the initramfs install /etc/rc_maps.cfg and the needed BPF code? I don't think so
[12:46]
syoung: No, it doesn't. [12:47]
mchehab: on STB/TV sets, it is very common to have some specific keystrokes to place the device into some "admin" mode [12:48]
syoung: ok [12:49]
mchehab: yeah, so, I guess it is better to have in-kernel decoders when it makes sense [12:49]
syoung: I see your point, thanks [12:50]
mchehab: hverkuil: I still think that timestamp and frame# should be enough [12:51]
hverkuil: Did you read the cover letter? [12:51]
mchehab: quickly [12:51]
hverkuil: timestamp is unreliable as an identifier.
Not sure what you mean by frame#: the buffer index, or the buffer sequence number? Neither can be reliably predicted.
[12:51]
mchehab: sequence number [12:52]
hverkuil: Can't be used. [12:53]
mchehab: why? [12:54]
hverkuil: timestamp would be possible if it wasn't for the fact that what you put in isn't necessarily what you get back. [12:54]
mchehab: that sounds like a bug to me
the timestamp you put in should be the one you get back
[12:54]
hverkuil: In the request for the OUTPUT queue you have to give a reference to one or more CAPTURE buffers that contain the reference frame (currently there is only one OUTPUT buffer, but that can change when slices are supported).
The problem is that this means you have to predict what sequence number the decoder will assign to the reference frame. But drivers can, for one reason or another, return buffers in e.g. an error state and skip to the next buffer, thus making it impossible to predict.
[12:55]
mchehab: ok [12:57]
hverkuil: Timestamps will be converted from one format to another. When the year 2038 support comes in it will be even worse. [12:57]
mchehab: if the conversion is not reversible, the conversion is broken
I mean, it shouldn't matter if I represent something as HH:MM:SS or as a "u64 seconds" value... it should be a 1:1 mapping
[12:58]
hverkuil: tv_secs in the timeval struct can be either 64 bits or 32 bits, how will userspace know this? And internally the timeval is converted to a u64 (nsecs since 1970), thus losing more data.
Unfortunately, it is not.
If we were to redesign this we would have a u64 for nsecs since 1/1/1970 (the internal kernel format).
Then it would be possible to use this as an identifier.
[12:59]
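As a rough illustration of the round trip hverkuil is describing (a sketch with hypothetical helper names, not kernel code): converting a struct timeval to the kernel-internal u64 nanosecond count and back is exact at microsecond resolution, but a 32-bit tv_sec overflows in 2038, and any intermediate representation with a different resolution or epoch breaks the 1:1 mapping mchehab is asking for.

    #include <stdint.h>
    #include <sys/time.h>

    /* Kernel-internal style representation: nanoseconds since 1/1/1970. */
    static uint64_t timeval_to_ns(const struct timeval *tv)
    {
            return (uint64_t)tv->tv_sec * 1000000000ULL +
                   (uint64_t)tv->tv_usec * 1000ULL;
    }

    /* The reverse conversion. It is exact for valid microsecond-resolution
     * timevals, but a 32-bit tv_sec cannot represent times past 2038,
     * which is where the y2038 work changes the layout userspace sees. */
    static struct timeval ns_to_timeval(uint64_t ns)
    {
            struct timeval tv;

            tv.tv_sec = (time_t)(ns / 1000000000ULL);
            tv.tv_usec = (suseconds_t)((ns % 1000000000ULL) / 1000ULL);
            return tv;
    }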
mchehab: if y2038 patches are breaking 1:1 conversion, they're wrong and should be rejected
perhaps that's the right way to do it
[13:00]
hverkuil: The problem is that tv_secs is a s32 for i686. [13:01]
mchehab: instead of changing the API to add tags, change it to use u64 [13:01]
hverkuil: I can't, no more room in v4l2_buffer. [13:01]
mchehab: if there's no room, where are you adding the tag? [13:02]
hverkuil: reserved2, which is a u32.
There is no room for u64 fields.
[13:02]
mchehab: that whole reserved* thing was a mess that we've created [13:03]
hverkuil: In v3 of the patch series it was in a union with timecode, but I decided to go for reserved2. Its meaning is determined by a new flag, so it can be used for other things as well. [13:03]
mchehab: it is way easier to just increase the struct size and let the kernel check if the size matches the old or the new version of the API [13:03]
hverkuil: As mentioned during the ELCE, we need a struct v4l2_buffer replacement. [13:04]
mchehab: ok, so let's go for it and do it right this time [13:04]
hverkuil: The current buffer struct is a nightmare, not just because it is full, but also because of horrible 32/64 bit alignment issues. [13:04]
mchehab: better than adding yet another crap glue [13:04]
hverkuil: But making a new struct is a LOT of work, and will take much longer. What stateless codec drivers need urgently is a proper way of tagging reference frames.
Hence this patch. It's perfectly fine to use reserved2 for that (it's unused after all).
Next year we should work on a replacement.
[13:05]
mchehab: I'm against adding a tag... we're just making things worse
for the future
[13:06]
hverkuil: Why? [13:06]
mchehab: as, once we add it, we can't remove it
let's just rewrite v4l2_buffer, fixing the timestamp
instead of adding another maintenance nightmare
[13:06]
hverkuil: No, I'm not touching that.
One option would be to add a flag that says that the timestamp is specified as a u64 instead of the struct timeval.
[13:07]
mchehab: works for me [13:08]
hverkuil: But what I hate is that we are overloading the meaning of timestamp: is it a time or a tag? [13:08]
mchehab: it is a time
not a tag
no matter if it is u64 or timeval
[13:08]
hverkuil: You can have multiple OUTPUT buffers with the same tag, but those buffers were filled at different times. So it is either a tag or a time, it can't be both. [13:08]
mchehab: sorry, I can't see what you mean
You mean that: in_frame[1] -> out1_frame[1], out2_frame[1]
it doesn't matter if [1] is a timestamp or a tag
you still have a 1:1 mapping
[13:09]
hverkuil: OK. If we have a decoder, then it is fed compressed frames. Currently there is one compressed frame which is decoded to one uncompressed frame. Userspace tags the compressed frame, and that tag is copied to the uncompressed frame to identify it for future compressed frames that have to refer to it.
So there is a 1:1 mapping.
[13:10]
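A minimal userspace sketch of the tagging idea being discussed, not the final API: the field usage follows hverkuil's description of the series (reserved2 carrying the tag, its meaning enabled by a new buffer flag whose name is omitted here), so details may differ from what was actually posted.

    #include <linux/videodev2.h>
    #include <sys/ioctl.h>

    /* Queue one compressed OUTPUT buffer carrying a userspace-chosen tag.
     * The idea is that the m2m driver copies the tag to the decoded
     * CAPTURE buffer, so a later request can refer to that reference
     * frame by tag instead of by a hard-to-predict timestamp or
     * sequence number. */
    static int queue_tagged_frame(int fd, unsigned int index,
                                  unsigned int bytesused, unsigned int tag)
    {
            struct v4l2_buffer buf = {
                    .index = index,
                    .type = V4L2_BUF_TYPE_VIDEO_OUTPUT,
                    .bytesused = bytesused,
                    .memory = V4L2_MEMORY_MMAP,
                    /* The proposed series adds a buffer flag telling the
                     * driver that reserved2 holds a tag to be copied. */
                    .reserved2 = tag,
            };

            return ioctl(fd, VIDIOC_QBUF, &buf);
    }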
mchehab: ok, but, on decoders, the timestamp is (or should be) copied [13:11]
hverkuil: But if the buffers contain macroblock slices (i.e. not a full frame), then multiple buffers are needed by the decoder to decode to a single reference frame. [13:11]
mchehab: so, if an out frame is for the in frame at time xxx, the same time xxx should be at the output frame(s) [13:12]
hverkuil: Each of those buffers might have different timestamps since they arrived (when streaming video or something) at different times.
So now you have to ensure that all buffers that produce a single reference frame have the same timestamp.
[13:12]
mchehab: if all slices correspond to the same input frame, they should all have the same timestamp
on m2m, the time from the output should match the one filled in at the input, and not be artificially produced by the driver
[13:13]
hverkuil: Why? They are in different buffers, and each buffer may have arrived at different times.
timestamps are set by the user, the driver only copies them.
[13:13]
mchehab: yes, the driver should *copy* them, instead of replacing them with any internal timing [13:14]
hverkuil: that's what happens.
I think we have a miscommunication somewhere.
[13:15]
mchehab: you'll still have:
in_frame[time1] -> out1_frame[time1], out2_frame[time1]
[13:15]
hverkuil: That's because the timeval of the output buffer is first converted to a u64 for internal use, then back again to a timeval. [13:16]
mchehab: forget about conversion [13:16]
hverkuil: And that conversion is not always the same. [13:16]
mchehab: assume that userspace will use the same u64 as kernelspace
with such an assumption, and if the kernel is copying the timestamp, userspace will be able to exactly match which output frames correspond to each input frame, using the timestamp
in other words, as I said before, there is a bug with the timestamp handling, as it is not providing a 1:1 mapping
there are two possible fixes:
[13:16]
hverkuil: But then it is no longer semantically a time, it is a tag. I.e., you cannot assume time-like properties. It is just a way to identify buffers, and it doesn't have to be monotonically increasing etc. [13:19]
mchehab: 1) fix the conversion to ensure a 1:1 mapping
2) change the uAPI to pass the same u64 used internally by the kernel
hverkuil: it is semantically a time
see, if I have frame #40 that was taken at 4:30:35, if I convert this frame to any other format (even splitting it into slices), the end result will still be a frame(set) that was taken at 4:30:35
[13:19]
hverkuil: No, it is a tag. It's just copied by the driver, userspace can interpret it as it wants. Stateless codec drivers will no longer put in timestamps (too cumbersome), but just indices into an array. [13:21]
mchehab: so, you're actually proposing that the stateless codecs should *not* copy the timestamps? why? [13:22]
hverkuil: I have slice #40 received at 4:30:35, and slice #41 received at 4:30:40, both are needed to produce a single reference frame. What timestamp would the reference frame get? [13:22]
mchehab: the time the kernel receives the timestamp doesn't matter
what matters is the time when the image "snapshot" was taken
[13:23]
hverkuil: If I have to identify that reference frame in the next request, then this needs to be specified by the stateless codec specification, since userspace will have to know whether the driver will copy 4:30:35 or 4:30:40 to the decoded frame.
For m2m devices the driver never timestamps anything. It just copies timestamps from the output buffer to the capture buffer(s). The problem comes when you have multiple output buffers that produce a single capture buffer. Which timestamp will be copied in that case?
You need to be able to identify the capture buffer by its timestamp later, so userspace has to know that.
[13:23]
mchehab: wait a moment: you don't identify frames with the request API by timestamp... you identify them by the request ID [13:26]
hverkuil: Much easier if userspace just provides a tag value totally unrelated to timestamps.
No, that's not what this is about.
[13:26]
mchehab: (11:26:29) hverkuil: Much easier if userspace just provides a tag value totally unrelated to timestamps.
this will still not solve the problem
[13:27]
hverkuil: A decoder produces reference frames. Later compressed P or B frames will refer to those decoded reference frames since those B and P frames contain basically the difference between the reference frame and the B or P frame. [13:28]
mchehab: slice1{time1, tag1}, slice2{time2, tag1} -> cap_frame {???, tag1} [13:28]
hverkuil: So when creating a new request containing a B frame, the metadata has to contain a reference to the decoded reference frame. [13:28]
mchehab: what time should it place in cap_frame? time1 or time2? [13:28]
hverkuil: right. [13:29]
mchehab: hverkuil: anyway, we're diverging a lot from the submaintainers' meeting subject [13:29]
hverkuil: With tags it is predictable. You can use the timestamp field (well, if it is replaced by a u64 first), but then it is no longer semantically a timestamp, it has become a tag. [13:30]
mchehab: I guess we'll need a specific meeting for that, and probably use some videoconferencing system that would allow us to have a sort of whiteboard
(or do it on a next media summit)
[13:30]
hverkuil: The cover letter has a link to a discussion with the codec devs on this topic. Just read it. [13:31]
mchehab: "It is my understanding that ChromeOS was using the timestamp as the cookie value."
(that's at the discussion link you pointed there)
[13:33]
hverkuil: They don't have multiple output buffers combining to a single capture buffer.
If we make a new API it should cover those cases.
Future proof it as much as possible.
[13:33]
mchehab: hverkuil: to be frank, I still don't understand why having two "cookies" (timestamp + tag) would make it any better
than just one
but, as I said, we're diverging from the theme of this meeting, and we're already out of time
we should schedule a meeting to discuss this at another time
[13:34]
hverkuil: yeah.
I can try to set something up for next week.
[13:35]
mchehab: ok [13:36]
hverkuil: Any days you know won't work? [13:36]
mchehab: it sounds worthwhile to have ChromeOS people involved in that too [13:36]
hverkuil: tfiga. So a bit of a hassle with timezones. [13:36]
mchehab: as they have some practical experience with stateless codecs
yeah, TZ is an issue
[13:36]
hverkuil: ideally ndufresne as well [13:37]
mchehab: agreed
Alexandre too? it seems it all started with his RFC patch
[13:37]
hverkuil: I can add him.
cedrus devs as well.
[13:37]
mchehab: yep
well, send an email to involved parties... let's see what works best for everybody
in terms of time/day
[13:38]
hverkuil: (I can't even turn timeval into a union with a u64, the alignments in that struct are *awful*) [13:39]
mchehab: I suspect that early morning in the BR TZ would probably be best for me
as it will be midday in the EU and early night in JP
[13:39]
hverkuil: yes, it's probably the only time that works. [13:40]
mchehab: I still think that it is time to get rid of the old v4l2_buffer struct for the best [13:43]
hverkuil: In the long term, yes. We should. But getting the new struct right will take time. [13:44]
mchehab: btw, what's the difference between index and sequence there? [13:45]
hverkuil: index refers to the buffer index (you alloc 4 buffers, each gets assigned index 0-3). [13:46]
mchehab: ah
* @index: id number of the buffer
[13:46]
hverkuil: The sequence number is generated by the driver whenever it is done with a buffer and is always increasing. [13:46]
mchehab: crappy comment [13:46]
hverkuil: The sequence number can skip a bit as well to indicate missed frames.
And since it is a frame sequence counter it will be the same for the top and bottom fields if FIELD_ALTERNATE is used.
Anyway, the sequence number is generated by the driver, never the application.
[13:47]
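For reference, the struct being picked apart here, roughly as in the 2018-era include/uapi/linux/videodev2.h (single-planar view; the comments below are added for this summary and paraphrase the points above, they are not the kernel-doc):

    struct v4l2_buffer {
            __u32                   index;      /* buffer slot number, 0..count-1, assigned at VIDIOC_REQBUFS time */
            __u32                   type;
            __u32                   bytesused;
            __u32                   flags;
            __u32                   field;
            struct timeval          timestamp;  /* tv_sec is 32 or 64 bits depending on the ABI */
            struct v4l2_timecode    timecode;
            __u32                   sequence;   /* driver-generated frame counter; may skip on drops */
            /* memory location */
            __u32                   memory;
            union {
                    __u32           offset;
                    unsigned long   userptr;
                    struct v4l2_plane *planes;
                    __s32           fd;
            } m;
            __u32                   length;
            __u32                   reserved2;  /* the only unused u32 left; the tag proposal reuses it */
            union {
                    __s32           request_fd;
                    __u32           reserved;
            };
    };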
mchehab: hmm... if we add a reserved field just after "field", it should be easy to redefine struct timeval as a u64
the size of the struct will change on both 32 bits and 64 bits
and the kernel can distinguish between a v1 and a v2 of it
[13:48]
hverkuil: So then we have a v1, a v2, and later a v3 (after we introduce a v4l2_ext_buffer)
That's why I chose to use reserved2 for a tag. It's simple, doesn't interfere with the layout of the struct, or with any timeval complications.
[13:49]
mchehab: https://pastebin.com/jMh3urp3
but it won't solve y2038 nor the alignment stuff
[13:51]
hverkuil: ??? Applications will have to be modified when they upgrade to the new header. You are breaking the API. [13:52]
mchehab: no, not if we change the name of it
and preserve the old one
[13:53]
hverkuil: No, don't go there. If we change the layout, then do it right and fix all that is wrong with the current struct and add new desired functionality. [13:54]
mchehab: (but that's a detail to be solved later) [13:54]
hverkuil: as discussed at the ELCE. [13:54]
mchehab: yes, I agree with that [13:54]
hverkuil: brb [13:54]
..... (idle for 22 min)
hverkuil: mchehab: sorry, that took longer than expected. Just one last question: are you available in the morning all week, or only on certain days? [14:16]
mchehab: for now, I don't have anything scheduled (except for the maintainer's meeting), but, as I had to cancel this week's trip, I may need to do it next week - still waiting for confirmation [14:18]
hverkuil: OK [14:19]
