For the irc meeting about tag support that's about to start:
In my cover letter to the patch series (https://www.mail-archive.com/linux-media@vger.kernel.org/msg136969.html) I wrote:
In particular, the output of ns_to_timeval(timeval_to_ns(tv)) does not necessarily match the input.
Actually (leaving aside y2038 issues), the output is identical to the input, provided the timeval is interpreted as a time. The actual tv_sec and tv_usec fields may not be the same.
This happens if tv_usec >= 1000000, since in that case tv_usec / 1000000 is added to tv_sec and tv_usec is set to tv_usec modulo 1000000.
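(a minimal standalone C illustration of that normalization; plain userspace code mirroring the kernel's timeval_to_ns()/ns_to_timeval() logic, not the kernel functions themselves:)
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC  1000000000LL
#define NSEC_PER_USEC 1000LL

struct tv { int64_t tv_sec, tv_usec; };

static int64_t tv_to_ns(struct tv t)
{
	return t.tv_sec * NSEC_PER_SEC + t.tv_usec * NSEC_PER_USEC;
}

static struct tv ns_to_tv(int64_t ns)
{
	/* a tv_usec >= 1000000 in the input gets folded into tv_sec here */
	struct tv t = { ns / NSEC_PER_SEC, (ns % NSEC_PER_SEC) / NSEC_PER_USEC };
	return t;
}

int main(void)
{
	struct tv in = { 1, 1500000 };	/* denormalized on purpose */
	struct tv out = ns_to_tv(tv_to_ns(in));

	/* prints "1.1500000 -> 2.500000": same time, different fields */
	printf("%lld.%lld -> %lld.%lld\n",
	       (long long)in.tv_sec, (long long)in.tv_usec,
	       (long long)out.tv_sec, (long long)out.tv_usec);
	return 0;
}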
hverkuil: Aha, I know why I didn't notice this problem before - 4.4 still had the timeval in vb2
Also, tv_sec can be either 32 bit or 64 bit, and may be converted from 32 to 64 bit and back when running a 32 bit app on a 64 bit kernel.
When glibc becomes y2038-aware, this same conversion will also take place on a 32 bit kernel, between apps compiled for an older glibc (32 bit tv_sec) and newer apps that use a 64 bit tv_sec.
So timestamps are in my view not reliable for tagging buffers, there are too many conversions going on in several places. As long as you use them for actual timestamps, you are fine (until 2038, that is :-) ).
mchehab: ping
pong
gnurou: ping
sailus: ping
(tag discussion)
here
To recap:
hverkuil: Pong.
This issue is specific to decoders: a stateless decoder is fed buffers containing compressed data, and returns uncompressed frames.
Currently we only support a 1:1 mapping: one buffer with compressed data generates one buffer with the uncompressed frame.
what about encoders?
But future codecs may output buffers containing slices of the uncompressed frame (so a 1:N mapping), or the compressed buffers may contain slices so multiple compressed buffers generate one uncompressed frame (N:1). I guess we should be able to handle N:M situations.
hverkuil: better to do a recap about the meeting itself
encoders do not have this problem, I'll explain in a bit.
as other people here may want to follow up
I first need to set out the problem that we need to solve, then I'll recap our meeting.
If that's OK?
The meeting today is related to this discussion: https://www.mail-archive.com/linux-media@vger.kernel.org/msg137014.html
(that should be enough :-) )
Not sure about N:M
Anyway, to work with a stateless decoder you need to be able to refer to earlier decoded keyframes.
I can only see 1:1, 1:N, N:1
So when you queue a compressed buffer you need to be able to 'tag' it so that you can refer to the future decompressed keyframe.
So in the 1:N case you tag the output buffer, and it will later generate N capture buffers that together form the keyframe.
For N:1 you give the same tag to the output buffers, and they decompress to a single keyframe containing that tag.
The question is whether to use reserved2 in struct v4l2_buffer as the tag (using a flag to indicate that it is used like that), or use the timestamp field instead.
Or even use timecode for this (or put the tag in a union with the timecode, as I did in earlier versions of the patch series)
hverkuil, tfiga, ezequielg: is this needed only for stateless codecs when slices are used?
I dislike using timestamp since it is really meant for timestamping.
No, it is also needed in the 1:1 case.
let's discuss first the 1:1 case
The problem doesn't occur for stateless encoders since userspace already knows the keyframes.
why do we need it for the 1:1 case?
mchehab: only stateless codecs need the userspace to explicitly specify buffers containing reference frames
hverkuil: I don't think we've ever considered stateless encoders
but I'd expect them to have the same property as a stateful decoder in this aspect
userspace would have to tell the encoder which earlier source buffers to use to encode further frames
tfiga: actually, you are right. But for stateless encoders you could actually rely on the buffer index since you know that.
hverkuil: yeah
Although we probably should use the same method as for decoders.
wouldn't it be confusing to have two different ways to refer to reference frames for decoders and encoders?
To answer Mauro:
Userspace queues up a compressed keyframe + state, next it queues up a P frame + state. The state must tell the driver to which reference frame the P frame refers (since it contains the diff against the reference frame).
You do not know when queueing these compressed frames for the decoder which capture buffer will contain the reference frame.
yes, but (a reliable) timestamp seems enough for the 1:1 case
So you need to have some other way of identifying it.
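(a hypothetical sketch of the idea; the struct and field names are made up, only the concept of referring to reference frames by tag matters:)
struct hypothetical_decode_params {
	/* tags of previously queued OUTPUT buffers whose decoded
	 * CAPTURE buffers serve as reference frames for this one */
	__u64 ref_tag[16];
	/* ... plus the per-codec state parsed from the bitstream */
};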
on an M2M driver, the timestamp is tagged by userspace at the output side, and the capture buffer should contain the same timestamp
(if not, this is a bug that should be fixed)
rephrasing my question: for 1:1, what's the problem with using the timestamp (except for some bug that would require fixes)?
A tag is not a timestamp. You are re-interpreting the meaning of the field. Timestamps have unusual behavior (as explained above), and re-interpreting that is asking for problems.
tfiga: gnurou: any opinions on this?
you can call whatever you want... "timestamp" on a m2m device is just a tag
mchehab: hverkuil: on our side there is nothing really that makes one or the other better or worse
How about this:
well I'm not sure that the reinterpretation causes issues here
personally a tag sounds a bit cleaner
because we can define it cleanly
and get rid of the time-related legacy
each frame should have a different, unique timestamp anyway - or it's not a timestamp anymore
but for me, just buffer indices would be the cleanest, if we could make it easy for the userspace to determine them
yep, that's my point... a timestamp is an unique identifier
add a helper function v4l2_set_buffer_tag(struct v4l2_buffer *buf, s32 tag) { buf->timestamp.tv_sec = tag; buf->timestamp.tv_usec = 0; }
for the 1:1 case
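(and presumably a matching getter for dequeued buffers, equally hypothetical:)
static inline s32 v4l2_get_buffer_tag(const struct v4l2_buffer *buf)
{
	return buf->timestamp.tv_sec;
}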
I wonder if not including buffers for both output and capture in the request wasn't a mistake
hverkuil: my point is: the way it is defined, "tag" just sounds an ugly hack
to the V4L uAPI as a whole
which affects not only m2m stateless codecs, but everything
tfiga: no, that was a good decision. I looked into it and it would become a nightmare.
tfiga: even if you include both in the request you still have the issue of specifying reference frames
Urgh, you can't use struct timeval in the state anyway: that struct should not be used in new APIs due to y2038 issues (and 32 bit vs 64 bit incompatibilities)
for the 1:1 case, I really prefer to use the timestamp. Of course, it should be reliable enough, in the sense that the same timestamp set at the output side should be present at capture
gnurou: I'm not sure what issue you have in mind
struct timeval has different sizes, so using that in the state control is a no-go.
period.
of course, after Y2038 patchsets, timestamp should be u64
hverkuil: we're using buffer indices in our old downstream drivers using config stores
and there was nothing problematic about it
tfiga: for your specific use-case. It assumes that userspace can predict the buffer indices, and that's not possible in general.
tfiga: the problem with buffer indices is that some codecs may deliver buffers in a different order
mchehab: hverkuil: but stateless codecs are exactly not supposed to do so
all the order handling is expected to be on the userspace
of course that wouldn't work for stateful codecs
so I can see a problem here with the behavior being inconsistent between stateful and stateless
tfiga: we had this discussion before: drivers can for one reason or another mark capture buffers as ERROR and skip to the next one. I think it is a very fragile approach. And it won't work for 1:N situations.
hmm... so userspace would reorder the frames to be something like I-frame I-frame P-frame B-frame P-frame ?
And since slices are not uncommon, I would expect that to become a problem. You don't want to give it an array of capture buffer indices.
mchehab: normally the frames are ordered in decode order in the bitstream
mchehab: and userspace would give them to the decoder in the same order
mchehab: but after decoding, the userspace needs to reorder the decoded frames into display order
hverkuil: but that would effectively be the same as tags - because you would be giving an array of capture buffer tags
the error handling is a valid problem, though
tfiga: No, all output buffers generated from the single capture buffer would have the same tag.
I still think that the best is to use, for the 1:1 case, the timestamp (of course, converted to u64)
I think you mean all capture buffers generated from the single output buffer
tfiga: sorry, yes.
for slices, we should have a flag indicating that...
then, it would make sense to add a "slice number" field
but then, how would you refer to particular capture buffers with tags?
mchehab: what do you mean exactly by 'converted to u64'?
we already discussed that at the past meeting...
converted to u64 in the kernel? Or in struct v4l2_buffer?
struct timeval is not y2038 compliant
also in general it doesn't sound like it would make sense to decode slices into different buffers
how would you display them?
v4l2_buffer should be changed to support u64 timestamps
instead of struct timeval
yeah - we don't want to refer to slices, only to full decoded frames
also, I wonder how the decoder could use the frame as reference if it was scattered into separate buffers slice-by-slice
with slices, the only safe way that would work for both stateful and stateless codecs is to have it as a separate field
mchehab: you can't do that, not without changing the struct size. And it doesn't matter for y2038 since struct timeval will change once glibc switches over to a y2038-proof struct timeval (and we need to add compat code for that as well)
An alternative is to go back to my previous patch series that just put the tag in a union with struct timecode. Actually, if we want we can use timecode as the tag. It already has all the same properties as a tag, without any of the crappy timeval problems.
hverkuil: what I want to do is to not have hacks
some weird badly defined field that only works for a single specific usecase
Why is using timestamp a hack, but timecode not?
making it part of a union doesn't make it look any better
Sorry, other way around :-)
hverkuil: it is not a green field...
sorry?
V4L2 has always called the ID which uniquely identifies the frame sequence the 'timestamp'
(for the good or for the worse)
changing this just for one specific usecase is plain wrong
Indeed, and that's why I wanted to add a new field that doesn't change any of the existing fields.
mchehab: hmm, the documentation seems to specifically refer to timestamp as time
(also, there's a 1:1 map between a frame sequence and a timestamp - except when we have slices)
and there is the "sequence" field in v4l2_buffer that's referred to as "sequence"
sequence is generated by the driver, you can't use it as an identifier since you have no control over it.
tfiga: yes, that's part of the problem.... there are too many fields there with very similar concepts
we're specifically looking for a way to refer to a buffer that was used as a destination for a decode
for the 1:1 m2m case, sequence could be just a copy from the capture buffer, but it is probably too late to change the semantics
for the current v4l2_buffer
mchehab: you can't. Also, sequence numbers are identical for the top and bottom fields when using V4L2_FIELD_ALTERNATE.
we can do that for a new ioctl using a v4l2_buffer_v2
This is basically the whole point of the tag field: nothing else really fits.
hverkuil: what was the problem with using OUTPUT buffer indices?
And yes, we can clean this up to some extent in a new v4l2_buffer version, but that will take a lot more time to design.
tfiga: twofold:
hverkuil: there's already a downstream hack that has been working (using buffer indices)
1) it won't work for the N:1 case
let's do it right instead of adding more hacks upstream
why wouldn't it work for the N:1 case?
if all the N OUTPUT buffers would map into the 1 CAPTURE buffer
then the userspace could just use any
2) it would require userspace to keep the OUTPUT buffer around while the corresponding CAPTURE keyframe is in use.
okay, that's fair
a problem indeed
for the 1:N (and N:1) case, IMO, the best is to have a separate field
I mean: nothing prevents frame #1 from using 3 slices, and frame #2 from using 5 slices
if we want a solution that would work for both stateful and stateless codecs, the only safe way is to specify both the frame number (or timestamp) and the slice number.
but for stateful codecs there is no need to specify anything
they handle reference frames internally
right.
tfiga: userspace will still need to know what frames will be grouped together
grouped together?
yes, to compose the full frame again
a stateful decoder produces full frames
what would be the use of a slice number?
I could imagine it decoding slice-by-slice and notifying the user that one slice was decoded
but the buffer would be garbage before all the slices were decoded
is there a point for user-space to know that a slice is decoded?
isn't the basic unit of interest a fully decoded frame?
I can imagine one could blit single slices to the framebuffer without waiting for the whole frame to update the display
yes. I suspect that this could help to reduce latency
but then the interface would need much more
the decoder would have to tell the userspace where the slice is in the buffer
sending slices to the GPU driver
position and size
and I believe that would still be one buffer for all the slices
so we would allow userspace to dequeue half-decoded capture buffers?
because for the next frame, the whole previous frame may be used as a reference
gnurou: yeah, that's the idea
you selectively update the video
gnurou: I don't see why not
if we open that Pandora's box, why not do that for stateful codecs as well then
but I don't think we're anywhere close to supporting that
I personally don't see why anyone would want to do something like this
yeah, true, but the point is: why do that just for stateless?
gnurou: for stateful codecs we would have to tell the userspace what the slice is
for stateless, the userspace has this knowledge already because it parsed the bitstream
supporting things like that will take a lot more work, no doubt about that. What I want to prevent is creating an API that might be insufficient in the future.
I mean, if slices make sense, it likely makes sense for stateful codecs too
and thus why should we complicate the interface for such an exotic use-case?
we should design something that will be able to address both cases
mchehab: it does. And it is indeed used to reduce latency. We used it in the past for video conferencing.
mchehab: hverkuil: my point is that this case doesn't need anything special from reference frame management
it's just N:1
IIUC the point of slices is to allow the decoder to be kicked in earlier, especially in slow network conditions
hverkuil: we had some internal discussions about that when talking about implicit fences support
I don't understand it as the ability to display invalid frames to the user
gnurou: yeah, given that the screen has a refresh rate too, I think there's no big deal
it would make sense for some usecases with stateful codecs
mchehab: I don't think we can easily support OUT fences with stateful codecs
because of the internal reordering
but for stateless codecs indeed
but then that actually imposes the requirement of decoding _exactly_ into the buffer given
so there is no place for skipping buffers
or reordering
and CAPTURE indices would just work for reference frames
OK, it seems I'm making no headway with my tag proposal, so lets look at how to do this using the timestamp field.
tfiga, gnurou: are you already working with slices for codecs?
mchehab: not sure what you mean by working with slices
stateless codecs decode slice-by-slice
for H.264
for VPx, there is no notion of slice
tfiga, I'm actually meaning the 1:N and N:1 usecases
are you in a hurry for such things?
mchehab: 1:N is plain impossible because hardware can't just use multiple buffers as 1 reference frame
mchehab: not yet
ok
N:1 is a fair case
yeah
but we haven't found any need for it
and with N:1 it seems that both timestamp and tag are usable
usually the decode of the whole frame is faster than the screen refresh rate anyway
in both cases, all slices of the same frame should have the same tag/timestamp
so, IMO, we should focus on solving the 1:1 case right now. Let's use timestamp (properly fixing any issues with it)
OK, I'll give up on the tag.
So struct v4l2_buffer has a struct timeval.
hmm, I personally prefer tag over timestamp, if we don't go with buffer indices...
we can later address the N:1 (and 1:N), but I'd like to plan on an API that would work also for stateful codecs
tfiga: buffer indices are out of the question anyway IIUC (unpredictable in case of errors, etc)
(or that it could be easily extended for such cases)
why would we have an error that would let us retry decoding into another buffer?
I mean, it could be a buffer mapping error, but it sounds like a critical error anyway
for the upstream code, I prefer not using buffer indices
We cannot use the struct timeval in the compound controls that contain the state. It has a different layout depending on the architecture, and we do not support compat code for controls.
(you could do it downstream, if you ensure that the driver won't mangle it)
yeah, that struct timeval is the thing that makes the timestamp ugly for this purpose
hverkuil: it is time to get rid of struct timeval for good, and ensure that the buffer struct is well aligned on 64 bits
mchehab: that requires a v2 of struct v4l2_buffer.
OK
speaking of v4l2_buffer, there are many other problems that we need to fix there...
So we can make a v2 which is basically the same as v4l2_buffer, but fixing alignments and turning the timestamp into a u64.
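(a heavily reduced, hypothetical sketch; the only point is the fixed-size, naturally aligned timestamp:)
struct v4l2_buffer_v2_sketch {
	__u32 index;
	__u32 type;
	__u32 flags;
	__u32 sequence;
	__u64 timestamp;	/* ns, CLOCK_MONOTONIC; same layout on 32 and 64 bit */
	/* the remaining fields would mirror struct v4l2_buffer */
};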
hverkuil: you already convinced me about such need at the Media Summit :-)
but we want a lot more, and that will take much more time.
v4l2_buffer2 will likely take a long time to design though
I guess as long as we make v4l2_buffer2 easier to extend, we could start quickly
time during which we won't be able to finish stateless codecs
Right.
I liked Hans' tag proposal because it would allow us to move forward immediately, without adding much complexity
And making a v2 will not help you with y2038 issues since we HAVE to provide y2038 compat support for the current struct timeval regardless.
sorry, I meant: 'for the current struct v4l2_buffer'
hverkuil: you know already what's broken there. Come up with a proposal that would be extensible
hverkuil: I'm not so sure
mchehab: I know some of what is broken.
mchehab: talk to Arnd about y2038.
I mean, by the time we reach 2038, all apps will be using v4l2_buffer_v2
but that's a separate question that we can discuss later with arnd
Yeah, that needs to be discussed with Arnd. I'm not taking that responsibility.
hmm, so to unblock us quickly, could we use timestamp for initial work on stateless decoders, even if it's ugly, and then solve it properly in v4l2_buffer_v2?
hverkuil: an easy fix for v4l2_buffer to avoid the compat thing would be to create a struct v4l2_old_timeval, identical to struct timeval :-p
and adding a WARN_ON if someone ever uses it after y2038
although arguably adding the new tag field would make it much easier to implement on kernel side
tfiga: that would probably work
then we can take our time and implement proper semantics on top of v4l2_buffer_v2
tfiga, gnurou: I don't see any issues with such approach
the point here is that we'll need to maintain the old v4l2_buffer for compatibility
we will probably have a converter from v4l2_buffer to v4l2_buffer_v2?
yes
that should also take care of converting the timestamp into something usable as well
We can use a u64 in the state control and add a simple static inline that converts the struct timeval to the u64 in the same way that the kernel does it.
we'll need that at Kernel side
the point being that the timestamp is unique for each frame - as long as we have this, we should be good to go
The u64 would be valid for a v2 of the struct as well, except there is no need to convert anymore.
and for the N:1 case, using the same timestamp for all the slices of a frame will also work
gnurou: good point... indeed it should work
static inline s64 v4l2_timeval_to_ns(const struct timeval *tv);
gnurou: using the same timestamp would mean the timestamps are no longer monotonically increasing. I'm fine with changing the spec (where needed) to drop any requirements about this for m2m devices, but it is an API change.
hverkuil: it all depends on what we mean by "timestamp"
my understanding is that this is the time of the whole frame - so all slices of the same frame should bear the same timestamp
well, that's why I hate using timestamp for this. It has a specific meaning, and we now want to use it as a tag.
hverkuil: or does it mean something else?
It has never been defined what it means for m2m devices. It has a specific meaning for output devices and capture devices, but not for m2m devices.
It's just copied.
hverkuil: I saw in the previous discussion that you mentioned the time at which the slice arrives from the network
... but what if the video is loaded from disk? :)
The implicit assumption is that it is set to the time of whenever the buffer was created. It certainly is assumed to contain a time.
my point is, I don't see how any definition of timestamp other than the one above could universally make sense
hverkuil: no, it is initialized like that in drivers, but the original idea (AFAICT) is to just have a time reference inside a frame
ideally, it should be tagged by the hardware itself
in order to allow sync between audio and video
To go back to my proposal: use a u64 (nsecs since 1/1/1970, same u64 that is used internally in vb2) to refer to key frames in the compound controls.
(I will be moving now and replying on my phone, please forgive any delay)
And add a static inline s64 v4l2_timeval_to_ns(const struct timeval *tv) to convert the timeval to this u64.
When we create a v4l2_buffer_v2 it will have a u64 instead of a timeval, and you can just use the u64 everywhere.
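(filled out, that static inline would presumably mirror the kernel-internal timeval_to_ns():)
static inline s64 v4l2_timeval_to_ns(const struct timeval *tv)
{
	return (s64)tv->tv_sec * NSEC_PER_SEC +
	       (s64)tv->tv_usec * NSEC_PER_USEC;
}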
from my side, OK
mchehab, hverkuil: there are two distinct problems that need to be solved for y2038:
a) ensure that user space with a redefined 'timeval' structure still works, by either not using timeval or implementing each ioctl for both timeval layouts
b) make sure that any 32-bit timestamps don't overflow, by either forcing CLOCK_MONOTONIC stamps, or forcing the CLOCK_REALTIME stamps to be unsigned 32-bit (this only gets us until 2106, but that should be enough to have everyone migrated to 64-bit systems)
We do however need both a) and b) in the near future, not just as we get closer to 2038 because there will still be 4.xx kernels running in 2038
arnd: in V4L2, we use struct timeval only in v4l2_buffer... from what you can see from the above discussions, it has other problems, so we want to get rid of it soon enough, in favor of a new struct that will just use a u64 for the timestamp (the same one that is used internally in the kernel)
mchehab: the point is that we still need compatibility for existing source code referring to v4l2_buffer, at least for a couple of years I assume
from my side, I would very much prefer to avoid applying any y2038 change there, in order to simplify the backward compat stuff
and that timeframe definitely overlaps with the earliest users of y2038 safe glibc
when do you foresee the first usage of a y2038-safe glibc?
I hope next year, probably musl before glibc, but they should both become available soon after the syscalls get added to the kernel
I've had people asking for backports to stable kernels as well
backporting to 4.14 is currently out of the question, but 4.19 may be possible
that part is not decided though, and I'd prefer not to backport to 4.19, but I'm not stopping others who want to do the work
perhaps we should place that odd y2038 compat stuff in a separate file (and a separate include file), to make it easier to strip it out at some point in the future
the stuff in the V4L2 API, I mean
in practice, I suspect that very few apps use the timestamp
mchehab: note that VIDIOC_DQEVENT has to be adapted for y2038 as well.
it depends on how you want to do it. the easiest way would be to go with changing the v4l2 header to using __kernel_old_timeval
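(for reference, __kernel_old_timeval keeps the layout the kernel's old struct timeval had on each ABI:)
struct __kernel_old_timeval {
	__kernel_long_t	tv_sec;		/* seconds */
	__kernel_long_t	tv_usec;	/* microseconds */
};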
hverkuil: also another struct that is not 64-bit aligned
Yeah, sucks...
I did a y2038 conversion for that ioctl in the past, I need to update it.
I think I had two earlier attempts a few years ago
I also forgot all the details
the important questions to ask are:
https://git.linuxtv.org/hverkuil/media_tree.git/commit/?h=v4l2-buffer&id=4a58dec83fa563a66996558507a521109380140a
1. do we ever use CLOCK_REALTIME timestamps (ktime_get_real etc) or just CLOCK_MONOTONIC?
2. is there user space that uses a copy of the kernel uapi headers instead of including the version from /usr/include ?
DQEVENT is guaranteed to be MONOTONIC. The timeval in struct v4l2_buffer is either MONOTONIC or 'unknown', i.e. for the latter we would have to review all drivers.
quick grep:
drivers/media/usb/uvc/uvc_driver.c:             return sprintf(buffer, "CLOCK_REALTIME");
drivers/media/usb/uvc/uvc_driver.c:             uvc_clock_param = CLOCK_REALTIME;
It really should be monotonic, but uvc is probably an exception.
two drivers (apart from vivid and the dvb av7110 driver) use ktime_get_real():
drivers/media/rc/streamzap.c:           sz->signal_start = ktime_get_real();
drivers/media/rc/streamzap.c:   sz->signal_start = ktime_get_real();
drivers/media/usb/uvc/uvc_video.c:              return ktime_get_real();
well, streamzap is RC
different API too
yeah, just UVC driver seems to use it
static inline ktime_t uvc_video_get_time(void)
{
	if (uvc_clock_param == CLOCK_MONOTONIC)
		return ktime_get();
	else
		return ktime_get_real();
}
it seems it supports both
the streamzap usage is also not for the uapi, and presumably should be changed to ktime_get()
default is monotonic
unsigned int uvc_clock_param = CLOCK_MONOTONIC;
arnd: yeah, likely.... syoung, could you please take a look on it?
The vivid usage is valid, no need to change anything there.
(well, it is a virtual driver anyway; even if it was an issue there, we can easily change it, as this is not for real hardware)
on UVC, there's a module parameter that switches between monotonic and realtime:
module_param_call(clock, uvc_clock_param_set, uvc_clock_param_get,
		  &uvc_clock_param, S_IRUGO|S_IWUSR);
It generates RDS and VBI timestamp data that is defined by the corresponding standards to be real time. It isn't impacted by glibc or y2038.
is it likely that anyone actually sets the uvc.clock=realtime module parameter?
arnd: no idea
pinchartl would know better
we had similar options in drivers/gpu/drm/ and removed them to only allow CLOCK_MONOTONIC without anyone ever complaining
commit 310fe52461e6244b01a04b011c2e886d6b69de16
Author: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Date:   Wed Dec 9 22:57:48 2009 -0300
    V4L/DVB (13827): uvcvideo: Switch to a monotonic clock for V4L2 buffers timestamps
from the patch's description, it seems that the parameter was added just in case the change to CLOCK_MONOTONIC would cause regressions
    The realtime clock provided by do_gettimeofday() is affected by time
    jumps caused by NTP or DST. Furthermore, preliminary investigation
    showed that SMP systems the realtime clock is based on the CPU TSC,
    and those could get slightly out of sync, resulting in jitter in the
    timestamps depending on which processor handles the USB interrupts.
    
    Instead of the realtime clock, use a monotonic high resolution clock to
    timestamp the buffer. As this could in theory introduce a regression
    with some userspace applications expecting a realtime clock timestamp,
    add a module parameter to switch back to the realtime clock.
It seems that it would be safe to remove it
arnd: let's assume all timestamps in v4l2 are monotonic, or can be made that way.
https://lists.ffmpeg.org/pipermail/ffmpeg-user/2014-March/020615.html suggests that at least four years ago, someone was using it
'For alsa, it uses abs time': not sure what is meant by that.
https://lists.ffmpeg.org/pipermail/ffmpeg-user/2014-March/020631.html
"	Yes, I tried that, but the resulting output file has the same sync issue (video leads the audio by about a half second). "
it sounds like someone tried, but it didn't work as he was expecting
https://lists.ffmpeg.org/pipermail/ffmpeg-user/2014-March/020674.html
it seems to be due to some kernel backward compatibility code in ffmpeg
well, there's a simple way to find out... remove support for it... if someone complains, revert the patch :-p
ok, but when built with new kernel headers (which you have to do on a y2038 safe glibc), you will always get CLOCK_MONOTONIC, because that macro is defined
The current code is a bit different there: https://git.ffmpeg.org/gitweb/ffmpeg.git/blob_plain/1f1ec958f6c68a5ceafea206a99c895f62d0f3ec:/libavutil/time.c
seriously, if this is required for Y2038, maybe the best would be to post a RFC patch with a good explanation about why we should remove support for CLOCK_REALTIME, copying a few important userspace mailing lists (ffmpeg, vlc, gstreamer) and see if anyone complains
you need to also check libav
as the fork could be different
I suspect it already works fine with all of them, given that you only get the CLOCK_REALTIME behavior in one specific driver, and only if you set a module parameter
true, but almost all cameras on laptops and modern USB cameras are UVC
basically, gspca is used on older models (and em28xx and a few other drivers on some very specific models)
the vast majority of modern cameras for PCs use the uvcvideo driver
https://git.libav.org/?p=libav.git;a=blob;f=libavutil/time.c has the same behavior as ffmpeg
(although most users don't add any special modprobe parameter)
I just verified that Ubuntu doesn't set it (I did not expect that it would)
most likely the only reason you would set that parameter is to run an ancient ffmpeg binary
I strongly suspect that it should be safe to remove
we can probably even leave the module parameter present for that use case, and hope that the people that rely on it today will stop doing so before y2038, so I don't think it's a big problem either way
or add a big warning if someone sets for realtime
removing it would be cleaner since it removes one thing that can go wrong long-term, replacing it with one that may go wrong sooner when people still remember why we do this
good idea
pinchartl will need to look at this as uvc maintainer. What I am interested in is if we only use monotonic, what impact will that have on the y2038 work?
hverkuil: using monotonic times consistently in particular means that we don't care about extending the timestamp for the purpose of correctness
since monotonic times are guaranteed to not overflow (specifically, not for 136 years after boot, which should be good enough for everyone)
Inside the v4l2 subsystem, you mean?
in an ABI
so your patch https://git.linuxtv.org/hverkuil/media_tree.git/commit/?h=v4l2-buffer&id=4a58dec83fa563a66996558507a521109380140a
is not needed for y2038, there is no reason to change the old v4l2_event structure to include a longer timestamp
you might want to include it for reasons other than y2038 that I don't care about, but then I'd suggest doing that as a separate series
But struct timespec changes size when glibc becomes y2038 enabled, so that still means we need to handle the compat stuff, right?
Hmm, that might not have been an issue for this ioctl. Too long ago since I wrote that.
I'm starting to suspect that the best would be to just adopt CLOCK_MONOTONIC for v4l2_buffer too
hverkuil: and that depends on the other question I raised earlier: do we have reason to assume that anyone uses a copy of linux/videodev2.h instead of including the version from /usr/include/ ?
arnd: yes, that's pretty much guaranteed.
that they have a copy.
ok, then we have to handle both versions
I suspect that not *all* apps
mchehab: one important app doing that is enough
libv4l has a copy of it
(which is maintained by us, btw)
90%? Most that I know of do this. Heck, we do that at Cisco!
no, I think that ship has sailed then. It means we do need something along the lines of
https://www.irccloud.com/pastebin/28keSvqS/
hverkuil: Cisco is likely biased because of you're one of v4l-utils maintainer
any details of that patch can be changed, the important part is that user space might see one of two different structure layouts depending on their libc, and this changes the ioctl command code
so the kernel has to implement both
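(the command code changes because the struct size is encoded in it; e.g. the existing definition is:)
#define VIDIOC_QUERYBUF	_IOWR('V', 9, struct v4l2_buffer)
(so the same source built against a 64-bit time_t libc yields a different VIDIOC_QUERYBUF number than against a 32-bit time_t libc, and the kernel must recognize both)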
Nothing to do with that. We have software that runs on different kernels depending on the BSP from each vendor. So we can't rely on installed kernel headers, since they will be from different kernels versions.
(it also means that uvc can keep using CLOCK_REALTIME, since anyone relying on that in 2038 will see 64-bit timestamps)
arnd: the arm 32 bit architecture is already y2038 safe, right? This was specifically an issue for i686.
no, this is all 32-bit architectures
there are two quirks that affect x86 specifically, but they only impact the finer details, not the general idea:
I probably confused it with the x86 quirks.
sounds to me that, for y2038, we can keep the backward compat code at: drivers/media/v4l2-core/v4l2-compat-ioctl32.c
- i686 uses 32-bit alignment for 64-bit types, which is why I added the "__attribute__((aligned(sizeof(time_t))))" in the modified structure definition
- x86-64 supports the x32 ABI which is the only 32-bit user space that does have 64-bit time_t already
Having just this hunk applied to the uAPI header:
+#ifdef __KERNEL__
+	struct {
+		__s64 tv_sec;
+		__s64 tv_nsec;
+	} timestamp;
+#else
 	struct timespec			timestamp;
+#endif
mchehab: yes, that is the minimum change for that file
all the rest can be confined at the compat32 file
mchehab: right, provided we start using that compat32 file on 32-bit architectures
on native 32-bit, we need to handle both versions of the command in the native ioctl handler
on 64-bit kernels, we need to handle the 64-bit version in the native handler, and both the 32-bit and 64-bit versions of that structure in the compat handler
there are multiple ways to implement that
hmm... good point
I would for simplicity just implement both in the native handler, which means that native 64-bit apps can now also call the ioctl command for the 32-bit structure. it's a bit ugly that way, but requires less kernel code
the second way is to handle the 32-bit structure in the compat file, and call into that from the native 32-bit fops. This keeps it nicely separated, but the call chain is a bit ugly
as an advantage, the legacy code won't be there on 64-bit machines
at least when they disable CONFIG_COMPAT, otherwise it is still needed in the compat handler
IMO, this is good, as this code tends to be forgotten after some time
and would be a potential point for a security exploit
the cleanest abstraction might be to split out the time32 handling into a new file and call that specifically from the compat ioctl handler for 64-bit and from the native handler for 32-bit
yes, that's what I'm thinking about
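(very roughly, with all function names hypothetical:)
/* v4l2-time32.c: decodes/encodes the legacy 32-bit time_t layouts */
long v4l2_ioctl_time32(struct file *file, unsigned int cmd, void __user *p);

/* native handler: try the 64-bit layout, fall back to the legacy one on
 * 32-bit kernels; the 64-bit compat handler would call v4l2_ioctl_time32()
 * for the same commands */
static long v4l2_native_ioctl(struct file *file, unsigned int cmd,
			      unsigned long arg)
{
	long ret = v4l2_ioctl_time64(file, cmd, (void __user *)arg);

	if (ret == -ENOIOCTLCMD && !IS_ENABLED(CONFIG_64BIT))
		ret = v4l2_ioctl_time32(file, cmd, (void __user *)arg);
	return ret;
}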
mchehab: when will you post the pull request with the remaining 4.20 media fixes to Linus?
mchehab: tfiga: gnurou: paulk-leonov: posted v5 of the series formerly known as the 'tag' series :-)
mchehab: introducing the v4l2_timeval_to_ns helper function solved a lot of problems.
These irc sessions may be bad for the blood pressure, but they are certainly good for new ideas.
hverkuil: thanks, I'll take a look
hverkuil: why not use timeval_to_ns()?
hverkuil: ah, this is UAPI
right.
hverkuil: I think one of the times we discussed the timeval stuff, you asked about __kernel_old_timeval, which was not there at the time, but is now
There is also ns_to_kernel_old_timeval() but no kernel_old_timeval_to_ns() yet
Yes, I know. I was planning to use that. Things have improved since the last time I looked at this.
hverkuil: I intend to submit today
arnd: regarding uvc.clock=realtime, I don't know
I'm tempted to just drop it and see if anyone complains
note that Android uses CLOCK_BOOTTIME, not CLOCK_MONOTONIC
pinchartl: ok, but note that the later discussion clarified that dropping it is not needed for y2038 support
I don't think that's a big issue though
since the events have to use 64-bit timestamps anyway, neither monotonic nor realtime ones can overflow
dropping realtime stamps in uvc may still be a useful cleanup, just completely separate from y2038
arnd: regarding your second question, several applications include private copies of kernel headers
pinchartl: yes, that was the reason we concluded that the kernel has to do 64-bit timestamps for v4l2_event
pinchartl: imho, for coherency with other drivers, it would make sense to remove it, but we need to be careful... perhaps adding a warning if people use it, keeping it for a while to see if people complain. If not, drop it in some future kernel release
since that is what the private copies of the header define when combined with a fixed glibc
mchehab: I think it makes sense to remove it, yes
I'm still wondering what we should do about CLOCK_MONOTONIC vs. CLOCK_BOOTTIME
CLOCK_BOOTTIME is used only in Android's specific kernel. There's nothing we can do to prevent vendors from doing whatever mess they want in their distros
(I mean, CLOCK_BOOTTIME on V4L2 drivers is used only at ....)
that's not the question. I wonder what we should do about CLOCK_BOOTTIME in mainline
it has use cases
pinchartl: hverkuil: any news on [PATCH v6 0/2] media: usb: pwc: Don't use coherent DMA buffers for ISO transfer ?
When you say Android uses BOOTTIME: is that for all media drivers, or just uvc? I don't see anything about this in the Tegra Android kernel (4.4) that we are using.
ezequielg: I leave that to pinchartl. I'm not involved unless someone pings me.
ezequielg: didn't I already comment that I was fine with the approach ?
hverkuil: as far as I'm aware, for everything, especially sensors (as in motion sensors for instance)
hverkuil: pinchartl says he's fine :-)
and thus for video as well, to synchronize with sensors
hverkuil: I have already Acked-by that patch. Are we in time for this merge window?
pinchartl: so what do they call? Not ktime_get? Just trying to understand what I need to grep for to see if that's present in our Tegra Android kernel.
ezequielg: I was actually expecting that pinchartl would make a pull request for it.
hverkuil: I wasn't planning to make a pull request :-)
hverkuil: you are listed as the sole maintainer for that driver.
perhaps we could ask Matwey if he has time to co-maintain it?
since he has hw
I'm trying to find ways to enable people, and avoid stalling like this in the future
hverkuil: look for ktime_get_boottime() or ktime_get_boottime_ts64() in modern kernels
pinchartl: please Ack patch 2/2 and I'll make a pull request.
hverkuil: ezequielg that would make sense to me
oops
I meant ezequielg: that would make sense to me
hverkuil: https://patchwork.kernel.org/patch/10644887/
as we'll have -rc7, I may still apply a few pull requests, if not too intrusive
hverkuil: in older versions also ktime_get_boottime_ts(), get_monotonic_boottime64() or get_monotonic_boottime()
ezequielg: the problem was that I just have no knowledge of that particular area of the driver, it required a usb expert.
sailus: it is probably too late for "[GIT,FIXES,for,4.20] Fwnode parsing fix"
mchehab: when do you plan to send your next pull request for v4.20-rc fixes ?
pinchartl: already sent
but not pulled yet, right ?
https://lore.kernel.org/linux-media/20181212135403.3ce9132a@coco.lan/T/#u
it was sent today. Give people some time to apply it upstream ;-)
:-)
sailus: I suspect that, on this request: Subject: [GIT PULL for 4.20] META_OUTPUT buffer type and the ipu3 staging driver
you're actually meaning 4.21
ezequielg: I'm reviewing the pwc patch
sailus: regarding the IPU3 driver, I think documentation should be merged at the same time as the driver
adding a new driver at -rc7 is something that should be avoided, except if you have a very good reason
even if it's in staging
otherwise it will complicate updating the driver and keeping the documentation in sync
sailus: I agree with pinchartl... please add the documentation patches on your pull request
and resend it to me by tomorrow, and I'll include it for 4.21
mchehab: Ack. I'll add the documentation patches.
Yes, it was indeed meant for 4.21.
mchehab: The driver API may still change, including the documentation.
It's a staging driver, so that's not unexpected.
the fwnode change can be postponed to 4.21? It will likely be just one week of difference
mchehab: It's a bug introduced for 4.20.
I mean, if I merge it for 4.20, I'll likely be sending it only next week
at -rc7
I don't think it's crucial to have for 4.20 outright.
which will likely be the last week for 4.20
Ack.
ok, so I'll just put it for 4.21 with a cc to stable for 4.20
mchehab: Sounds good.
ok
hverkuil: handling aspeed driver now...
I'll add a patch to note that the IPU3 driver API is still subject to change, if that's not there yet.
I wonder why checkpatch keeps making noise about new Kconfig options:
WARNING: please write a paragraph that describes the config symbol fully
#53: FILE: drivers/media/platform/Kconfig:35:
+config VIDEO_ASPEED
mchehab: I'll mark the IPU3 pull request as "changes requested".
This is a good point, though:
CHECK: usleep_range is preferred over udelay; see Documentation/timers/timers-howto.txt
#574: FILE: drivers/media/platform/aspeed-video.c:492:
+	udelay(100);
The patch adds a comment above the udelay that it can be called in interrupt context.
I won't delay it just due to that, but if the driver doesn't really need a precise time, usleep_range is better
can't do usleep_range there.
OK!
I had the same comment myself :-)
:-D
the include list seems a little long for my taste
I think some includes can be removed since others include them already. But the list is nicely alphabetical!
yes, better to ask the developer to do some cleanup... I guess removing some will likely raise issues with backports
like of.h and of_irq.h
not a big issue
with regards to that udelay...
I'm wondering if it wouldn't be better to call aspeed_video_irq_res_change() from a kthread
instead of doing it in IRQ context
I don't think it is necessary. Although this could be done as a refinement later. It should be rare that a resolution change is detected, given the environment these aspeed devices work in.
just wrote an e-mail about that
yeah, this could be solved later on
patches applied
Nice. Eddie will be pleased as this took longer than he probably expected.
I'm almost sure we use kthreads on other drivers that require engine resets
pinchartl: you are still reviewing the pwc patch?
ezequielg: I've replied
pinchartl: I'll make a pull request for that series
and I'll uncork the champagne
:-)