#v4l 2019-01-29,Tue


***bingbu has quit IRC (Ping timeout: 244 seconds) [03:42]
APic has quit IRC (Ping timeout: 250 seconds) [03:50]
......................................................... (idle for 4h42mn)
_abbenormal has quit IRC (Read error: Connection reset by peer) [08:32]
.............. (idle for 1h6mn)
gnurou_mripard: sorry, from your email I cannot infer whether you think reconstructing the bitstream for hardware that requires it is a good or a bad idea :) [09:38]
mripardI think it's a bad idea
and we don't seem to have the same definition of "hardware that requires it" either :)
if we have the bitstream that has been parsed already by the userspace, and if we can operate with what has been parsed, why would we allocate a new buffer, move the slice data around and fill the rest of the buffer with the data you parsed in the very first step?
[09:43]
tfigamripard: for consistency? [09:53]
gnurou_how do you suggest we do it? keeping in mind that we want to keep things simple [09:53]
tfigaso you can have the same userspace work with different hardware [09:53]
gnurou_we could send the raw data structures to the kernel, but then you'd be dealing with variable-length data from user-space with fields that are themselves variable-length
which the kernel would have to parse for hardware that does not take the raw data structures
[09:55]
...... (idle for 26mn)
mripardthat looks like the opposite of keeping things simple
(reconstructing the bitstream)
why can't we just have different formats that would have different data?
and the list of controls that are mandatory would change from one control to the other
[10:21]
gnurou_It complicates user space considerably [10:23]
mripardbecause reconstructing the bitstream wouldn't? [10:24]
gnurou_One case is easier to manage than many [10:24]
mripardthat's not really working either
if that's truly what we believe, we would have stuck with the stateful API then
[10:25]
gnurou_It's really a matter of which way is the more painful [10:28]
mripardand if we want to support all the features that the rockchip IP has apparently [10:29]
gnurou_Manage different properties per-hardware or reconstruct part of the bitstream for some [10:29]
mripardthen we'll end up sending the whole bitstream to the kernel [10:29]
gnurou_Mmm I need to educate myself more about this ip [10:31]
mripardwhat hardware were we discussing then? [10:32]
gnurou_All of them? :) I'm not familiar with the rockchip ip in any case
What I like about the current patch is that it keeps things in a structured way that both user space and kernel can interpret easily
[10:33]
mripardthe rockchip IP ayaka was discussing is a pretty good example then, because it pushes that question to the limit [10:37]
gnurou_Yep [10:37]
mripardbecause it can operate on binary DPB, scaling list, PPS header and Cabac table IIRC
throw the slice data into the mix, and you really end up better off just sending the whole bitstream
[10:37]
gnurou_But it needs all these elements to be presented separately and doesn't do any kind of buffer management, right? [10:38]
mripardI don't know, I guess it doesn't do the buffer management
but for the latter I have no idea
but still
do we want, since the rockchip driver can operate that way, to force that down the throat of all drivers?
[10:39]
gnurou_well I would need to know more about the way the rockchip ip operates to answer that question
If we can make it use the stateful api, then problem solved
[10:41]
mripardeven if it's purely theoretical, I mean, that's the direction you were arguing for
if that particular IP can behave the way we want to, maybe the next one won't
where do we draw the line, and what set do we want to reconstruct exactly?
[10:41]
gnurou_Yes, that's definitely something we want to consider [10:42]
mripardif we don't want to reconstruct the bitstream, then we can support odd cases as they happen [10:43]
gnurou_What I'm afraid of is that we end up with a soup of controls of various granularity, which would make it difficult to keep user space compatible with all the cases [10:43]
mripardwould that be so complicated? most of these data can be provided through additional controls (for things like the raw reference lists), or through having a format with more data
it's just a matter of which format and controls are supported by the driver then
just like any camera application that has no idea ahead of time what format and control and ISP the sensor is going to have
and have to discover it at runtime
[10:45]
gnurou_I have hoped that codecs would be easier to handle than cameras ;) [10:48]
mripardwell, apparently, they aren't :)
and that's also why we're merging these controls as an API that isn't public yet
so that we can change it if we want to support more hardware and it doesn't work for them
so why not just merge the current set of APIs and figure out how to support those odd cases when we actually encounter them, with kernel and userspace code and some understanding of the hardware to be supported?
[10:50]
ayakamripard, not actually, there are three decoders that rockchip would use [10:53]
gnurou_By merging, you mean in staging right? [10:55]
mripardthe driver in staging, and the UAPI is in linux/media, so not actually a uapi [10:55]
gnurou_Yeah, this will obviously take some time to clear, so at least we can try to get what we have in [11:02]
ezequielgmripard: gnurou_: what is the current status of h264 controls?
and specially format
[11:10]
gnurou_ezequielg: Maxime's patch is the latest proposal on the topic
sorry, afk for a short while
[11:12]
ezequielgmripard: oh, btw, mpv/ffmpeg has a pretty neat v4l2-request implementation.
have you seen that?
working pretty well, with gbm.
[11:13]
...... (idle for 25mn)
hverkuilmchehab: I noticed the same thing with vim2m last week. Thank you for working on it! [11:38]
mchehabanytime
it has another problem which I'm working on right now:
it produces timeouts if multiple file handles are used
(because it uses a work queue per dev instead of per fh)
I suspect it should be trivial to fix
bbiab... need to reboot
(using the same machine for devel and desktop is painful)
[11:38]
gnurou_ayaka: where can we find details about the rockchip codecs you were talking about on the email thread?
mripard: we should sync at some point with all the data and try to decide a course of action for the long term
there has to be a way to manage this elegantly
[11:55]
...... (idle for 28mn)
mripardezequielg: I have a new version queued that I intend to send this week
gnurou_: agreed, I guess ndufresne's feedback would be valuable as well
[12:26]
ezequielgmripard: how are we tackling the start-code ?
rockchip requires the nalu start-code on the slice payload
chromeos is just adding it, but that won't cut it.
thinking in terms of va-api / ffmpeg working for both.
[12:32]
mripardthen rockchip/chromeos will deal with this when they'll get to it? [12:35]
ezequielgwhat do you mean?
i mean, now is just as good a time as any to start thinking about how we are gonna support codecs.
[12:36]
mripardI don't have the hardware, I don't have any understanding of the hardware, I don't have any incentive to reconstruct the bitstream and / or the NALU start code in userspace, and the API can be changed at will [12:38]
ezequielgjust as was done when we discussed the JPEG support, we tried to do that to solve all cases, not just one. [12:38]
mripardso if rockchip, chromeos or anyone want to work on this, then feel free to do so and provide suggestions [12:39]
ezequielgI will. [12:40]
mripardwe've discussed this earlier today already [12:40]
ezequielgI was asking politely if you had anything on your mind. [12:40]
mripardapparently my solution isn't practical [12:41]
ezequielgwhat? another fourcc? [12:41]
mripardyes [12:41]
ezequielgyes, that's the most direct and naive.
mchehab has rejected the headerless JPEG fourcc, and kind of convinced me of how nice it is for userspace to avoid dealing with more fourccs.
in this case, maybe it's not so bad? the difference is "add nalu start code" vs. "don't"
i mean, specifically in the h264 case.
[12:41]
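For context on the framing being debated here: Annex B byte-stream format simply prefixes each NAL unit with a 00 00 01 or 00 00 00 01 start code. A minimal Python sketch of the userspace-side fix-up under discussion (the helper name is hypothetical, not from any posted patch):

```python
# Annex B start code (H.264, Annex B byte stream format).
# A 3-byte 00 00 01 form also exists; the 4-byte form is used here.
START_CODE = b"\x00\x00\x00\x01"

def frame_slice(nalu: bytes) -> bytes:
    """Prefix a raw slice NAL unit with an Annex B start code,
    as hardware like the Rockchip decoder reportedly expects.
    Hypothetical helper, for illustration only."""
    if nalu.startswith(b"\x00\x00\x01") or nalu.startswith(START_CODE):
        return nalu  # already Annex B framed, nothing to do
    return START_CODE + nalu
```

The whole "reconstruction" step for this piece of hardware state is one concatenation per slice; the debate above is about whether the kernel or userspace should be the one doing it.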
hverkuilI have to add that parsing the JPEG header in kernelspace is really easy. Anything more complicated is probably not suitable to do in the kernel. [12:45]
ezequielgezequielg nods
I had a discussion with ndufresne about this.
In the JPEG case, the parsing didn't really introduce any serious concerns.
[12:45]
ayakagnurou_, believe me I don't know either [12:47]
ezequielgayaka: hi!
i will post some mpeg-2 decoding patches this week (i hope).
i have them here more or less cleaned-up.
on rk3399 mpv/ffmpeg/panfrost is working well.
[12:47]
ayakaezequielg, I know, but I would post another as well [12:48]
gnurou_ayaka: uh, we have a problem then :p [12:48]
ayakaI don't like the current one, written for all request drivers [12:48]
ezequielgayaka: i see.. [12:49]
ayakaezequielg, which tools do you use to verify the driver [12:49]
ezequielglike i just said, mpv+ffmpeg+panfrost [12:49]
ayakaezequielg, https://github.com/hizukiayaka/linux-kernel/tree/mpeg2_mpp_v4l2
oh, mpv, I want to skip it
is there a way to use ffmpeg only?
[12:49]
ezequielgand how do you display? [12:50]
ayakagnurou_, I mean there are too many threads
ezequielg, no need, display won't work with upstream
ezequielg, also I can track the result via registers
I just need to know the v4l2 flow works
I would focus on improving the v4l2 core part
but not the device of rockchip, although I prefer the way I would write
[12:50]
ezequielgI really don't understand what you are trying to do :-)
but I guess we'll see the patches...
fwiw, the version I will post will work with mpv + ffmpeg (rendering with kms or gbm)
[12:51]
ayakaezequielg, the current problem is about input data or input mechanism [12:52]
ezequielgand with va-api or whatever implements the request api [12:52]
ayakaonce I verified the driver I wrote would work [12:53]
mripardezequielg: we would have to reconstruct the NALU start code [12:53]
ayakaI would move forward to the v4l2 part
besides I don't like the version device mixed with decoder and encoder
that is what I solve and post in the vendor part
ezequielg, anyway I do refer to yours and kwiboo's as well, as I have forgotten much about v4l2
[12:53]
tfigamripard: first of all, the potential rockchip decoders that are given as an example of problems are not used on Chrome OS
mripard: AFAICT, the ones we use (rk3288 and rk3399) would work fine with what's being proposed +/- the start code
but I believe we already figured out that we want to put annex.b slice NALU in the buffers?
[12:56]
ayakacome on, the one for chrome os is slow and ugly
I really don’t want to mention it
reconstruct
[12:58]
tfigaayaka: we didn't see any performance issues [12:59]
mripardtfiga: I didn't get the memo apparently, but ok [12:59]
ayakatfiga: because you don’t know [12:59]
tfigado you have any precise numbers to confirm what you say? [13:00]
ayakayes, but I should not make it public [13:00]
tfigamripard: https://patchwork.kernel.org/patch/10713675/#22439577
ayaka: aha, I have my numbers that say that it's fast and won't make them public either...
[13:00]
ayakait reconstructs the bitstream, although I've seen some driver-like code or STL do the same thing [13:00]
tfigacome on, we're expected to have a technical discussion here
so we expect facts
[13:01]
ayakatfiga because of the facts I know, I can't tell you the result [13:01]
tfigawe can reach the decoding speed as advertised by the hardware
so what performance problem is there?
[13:01]
ayakanot really
I know the result of rockchip's proprietary driver but I can't say it here. but I would point out some obvious problems
I have a bunch of videos that chrome os won't be able to play
now the problem is coming
reconstruction is slow and misleading
[13:01]
ezequielgtfiga: thanks for the link. [13:04]
ayakaproblem two, updating the cabac table costs a lot of time [13:04]
tfigaayaka: how much does it take, few microseconds? [13:04]
paulk-leonovtfiga, I'm not sure the decision to include the full annex-b slice NALU really still stands after ayaka brought up that other elements might also need to be passed in binary form [13:05]
ayakatfiga it depends on the cpu frequency
I know chrome os doesn't care about it a lot
[13:05]
tfigaplease provide some numbers then
that's how we work here
and then we can think about optimizing
[13:05]
ayakaI won’t think any bits operation would be fast at arm [13:06]
tfigaI'm not saying there is no problem - I'm saying we don't have any evidence that there is
if you give us evidence that there is a problem
then we can fix it
[13:06]
ayakaI should a [13:07]
paulk-leonovtfiga, the general issue is that some decoders will take "parts of raw bitstream" while others will need it all parsed. I think we can all agree that parsing in the kernel is a no-go. So we can either have a way to include parsed or unparsed data depending on the need, or decide to go with parsed always and do some reconstruction in the kernel -- either way the decision that applies should also apply to the slice header [13:07]
ayakaI should say the performance can be much higher than documented
I think the chrome os folks never cared about the multiple-streams problem
even a few ms would cause a lot of problems here
[13:07]
tfigawell, few ms would definitely cause a problem [13:08]
ayakabelieve me, when you are using an A7 cpu at 500MHz [13:08]
tfigahowever it's not true that Chrome OS doesn't care about multiple streams, we have performance requirements for those [13:08]
ayakaI know, quite lower than standard [13:09]
tfigaI'm not sure what standard you refer to [13:09]
ayakathose chips sell to the China market
rk3399 is very expensive for the China market
anyway, ayaka doesn't stand for rockchip
ayaka is a developer for open source
[13:09]
tfigaanyway, it doesn't matter, there are tens of different projects with different requirements [13:10]
ayakabut when it comes to upstream
there would be only one interface
[13:11]
tfigamy point is, performance is something that needs to be measured and optimized based on numbers
if we don't have numbers, we can't optimize
please provide us some numbers confirming your theory and then we can work on it
[13:11]
ayakaThe big problem is not the numbers
it is an inflexible interface
Have you ever heard of the other chip design companies?
do you have an idea of their decoders and encoders, which are stateless as well?
[13:13]
gnurou_do you have a proposal for something more flexible? [13:14]
tfigayes, I do [13:14]
ayakagnurou_: I am writing a patch called memory region
it is used to describe a memory with different regions like meta data
[13:14]
tfigaI think we could just include full NALUs and have offsets point to appropriate parts of the bitstream [13:15]
ayakatfiga: it is not a good idea either
tfiga: you are a exporter at vp9 right?
[13:15]
tfigaexporter?
well, VP9 doesn't have NALU obviously
[13:16]
ayakaexporter
I mean good at
[13:16]
tfigawell, VP9 is much simpler than H264 [13:16]
ayakaexpert
tfiga: no, there is a motion table
which is updated between pictures
you need to read it or the parsing of the next picture can't be started
I don't remember the performance or workflow issues I met before
I remember there is an N frame which is used for reference but not for display
[13:16]
tfigaayaka: you mean motion vectors? [13:19]
ayakayes it is
I have not read vp9 for a year, I forget most of the problems I met before
I'm still struggling with the userspace
tfiga: will you come to FOSDEM btw
[13:19]
tfigaayaka: I'm not so sure about vp9 to be honest
but we're looking into h264 first
[13:21]
ayakatfiga it is easy, once I verified my mpeg2 dec, I can move to h264 [13:22]
tfigawe can have controls that include all the parsed information, but also full NALU bitstream with offsets
and NAL unit type
and I suppose that would work for any type of hardware
[13:22]
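The proposal above, full Annex B bitstream plus offsets and NAL unit types, can be sketched in a few lines. For H.264, the nal_unit_type is the low 5 bits of the first byte after the start code; the function below is purely illustrative and not from any posted patch:

```python
def scan_annexb(buf: bytes):
    """Return (payload_offset, nal_unit_type) for each NAL unit
    found in an Annex B buffer. Matching the 3-byte 00 00 01 code
    also covers the 4-byte form, which is just a leading zero byte
    plus the 3-byte code."""
    units = []
    i = 0
    while i + 3 <= len(buf):
        if buf[i:i + 3] == b"\x00\x00\x01":
            payload = i + 3
            if payload < len(buf):
                # nal_unit_type: low 5 bits of the first payload byte
                units.append((payload, buf[payload] & 0x1F))
            i = payload
        else:
            i += 1
    return units

# e.g. an SPS (type 7) followed by an IDR slice (type 5), payloads truncated
stream = b"\x00\x00\x00\x01\x67\x42" + b"\x00\x00\x01\x65\x88"
```

With (offset, type) pairs like these carried alongside the buffer, a driver for hardware that wants the raw byte stream can point at it directly, while a driver for "parsed" hardware still knows where each unit sits.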
ayakatfiga wait a second, I didn't realize you know the other vendors' decoders
tfiga I want to talk more about that
[13:23]
tfigaI'm sorry, I can't tell you about other vendors
at least not yet
[13:23]
ayakaIt is ok, I know all of them
I know most of them in the China market
also those Taiwanese ones
stateless decoder is an odd name for this kind of device
anyway, most of them are just accelerators
[13:23]
tfigastateless means that the hardware doesn't store any state internally [13:26]
ayakaso there is no way to define all the input formats for them, only the most common ones are possible
yes and no
[13:26]
tfigait gets everything from the driver every time a decoding job is scheduled [13:27]
ayakasome decoder and encoder with a link list mode [13:27]
tfigawhat is a link list mode? [13:27]
ayakathey do track the previous result of the previous picture
but they don’t care about the session
tfiga current driver is one shot one picture
link list mode is one shots many pictures
[13:27]
tfigaisn't it just a list of scheduled decoding jobs?
but the list is still constructed by the driver, isn't it?
[13:29]
ayakayou can configure the registers for a series of pictures
but it would require less than the full mode, some of them would be filled in by the decoder or encoder itself based on the previous result
tfiga nope, more like you push a bunch of registers into the decoder
which can be in the same decoding sequence or not
it depends on the device capability
[13:29]
tfigabut it's the driver which pushes the registers, right?
there is no firmware that manages that
or hardware logic
and anyway, it's just a scheduling thing
the statelessness applies to decoding data
[13:33]
ayakatfiga, of course, even those drivers with firmware do the same thing [13:34]
tfigasure, there is always some state - the hardware has registers
the registers are not volatile
[13:34]
ayakatfiga, it's just that there are other internal registers for those devices with firmware [13:34]
tfigabut our meaning of stateless is that the driver manages the registers, not hardware/firmware,
or to be precise, the driver fully controls all the state
[13:35]
ayakaI would like to say it is the driver managing the session [13:35]
tfigayes, I think "session" would also make sense [13:36]
ayakatfiga, so I think the current driver is not stateless enough, since it would keep some buffers for the cabac table and such [13:36]
tfigabut the driver is not supposed to be stateless
the driver is supposed to manage the state
so from userspace point of view, the driver is stateful
[13:37]
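As a rough illustration of this "stateless" model, where the hardware keeps no session and userspace resubmits everything per decode job, here is a sketch; all names are made up for illustration and are not real V4L2 structures or controls:

```python
from dataclasses import dataclass

@dataclass
class DecodeJob:
    # Everything one picture needs, parsed by userspace and handed
    # down per job, because the hardware keeps no session state.
    sps: dict            # parsed sequence parameter set fields
    pps: dict            # parsed picture parameter set fields
    ref_frames: tuple    # buffer indices of reference pictures
    slice_data: bytes    # the slice payload itself

def program_hardware(job: DecodeJob) -> list:
    """Model of the driver's role: translate the fully specified job
    into register writes; nothing survives between calls."""
    return [
        ("REG_SPS", sorted(job.sps.items())),
        ("REG_REFS", job.ref_frames),
        ("REG_SLICE_LEN", len(job.slice_data)),
    ]

job = DecodeJob(sps={"log2_max_frame_num": 4}, pps={},
                ref_frames=(0, 2), slice_data=b"\x65\x88")
writes = program_hardware(job)
```

The point tfiga is making maps onto this sketch directly: the "stateless" property lives in `program_hardware` needing the full `DecodeJob` every time, while the session (which buffers hold which reference pictures) is tracked one level up.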
ayakatfiga, I don't agree with you
tfiga, have a look at those vendor drivers
the driver is just a common interface to the hardware
I don't want there to be state in the driver, but I can accept the face of the current v4l2
[13:37]
tfigawell, we don't want what those vendor drivers do [13:38]
ayakatfiga, for example, the current driver doesn't allow width or height changing [13:39]
tfigaV4L2 is supposed to expose an abstract functional interface [13:39]
ayakatfiga, have you thought of vp9, which is common for its [13:39]
tfigato achieve some operation [13:39]
ayakaand SVC for H.264 [13:39]
tfigain this case it's an interface for decoding video
thanks to it, we can use different hardware platforms with different userspaces
[13:39]
ayakatfiga, that is why I wrote https://github.com/hizukiayaka/linux-kernel/blob/mpeg2_mpp_v4l2/drivers/staging/rockchip-mpp/mpp_dev_vdpu2.c#L200
tfiga, there are more problems I can't say here, as it is a loyalty problem
[13:40]
tfigawell, the lack of ability to change the width and height is just a missing feature
which we can add if needed
[13:41]
ayakabut trust me I am not boasting, I need some time to make them work
I was tired of the clock tree problems upstream, it is not solved on rk3399 yet
that is why I didn't do any contribution for a year
[13:43]
tfigaanyway, I'm not sure what we're trying to discuss here [13:44]
ayakaand why I would choose the rk3328
I think we have talked through many problems here
[13:44]
tfigaso I gave some example solution for H.264, but didn't get a reason why it wouldn't work [13:44]
ayakathe current problems of the driver in chrome os
are why I know the v4l2 interface is not flexible
[13:45]
tfigalooking for proposals :)
we proposed something that works for anything that we can think of
I tried to propose something that would solve your problem, but you say it wouldn't work
so please propose something that would
for the time being we have the api in staging
we can wait with moving it out of there until we see your proposal
[13:46]
ayakasorry for bad network connection
tfiga, I am trying to push some before I came to FOSDEM and I would stay at EU for half of a month
[13:52]
tfigaayaka: okay, that would be great [13:53]
ayakaand I won't bring my computer so please wait a little longer time [13:53]
tfigain any case, that's exactly the reason we're going with the staging tree [13:53]
hverkuiltfiga: reviewed v3 of the stateful codec spec. Thank you for all your work on this! [13:54]
ayakait is a pity that you won't come, otherwise we could talk more about this there [13:54]
tfigaayaka: indeed, sorry [13:55]
ayakaI really should have looked at this topic more closely before it got to where it is [13:55]
hverkuilayaka: tfiga: the cedrus driver (and corresponding MPEG2 API) will remain in staging for a while: we need at least one other stateless decoder driver and ideally one stateless encoder before we will move it out of staging. [13:55]
ayakamy summary doesn't mention those problems of vp9 and the input data of the other vendors [13:56]
tfigahverkuil: thanks, looks like relatively small number of comments. I'll wait few more days and try to respin [13:56]
ayakahverkuil, oh I forgot, there is some problem with MPEG-1 and D frames
which the v4l2 header does not cover
maybe some problems with field pictures too
[13:56]
hverkuiltfiga: it's in good shape. Looking forward to including it in the spec. [13:57]
ayakaI wish it would stay in staging for a little longer [13:58]
hverkuilayaka: post this to the mailinglist. It's something for paulk-leonov to look at (not my area of expertise). [13:58]
ayakabut I think nobody would use mpeg-1 these days
it is ok for paulk-leonov I would meet him in a few days
[13:58]
hverkuilah, he's at fosdem as well?
still, just post such things to the mailinglist. Then others can comment on it as well.
[13:59]
ayakayes, I knew him years before [14:00]
paulk-leonovhverkuil, yes I'll be around [14:00]
...... (idle for 26mn)
ayakaanyone have an idea of how ffmpeg will work with v4l2 request
ffmpeg -i ~/videos/19sintel_mpeg2.mpg -hwaccel drm -hwaccel_device /dev/dri/card0 -v verbose /dev/null
would always tell me "Option hwaccel (use HW accelerated decoding) cannot be applied to output url /dev/null -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to."
[14:26]
ffmpeg -hwaccel drm -hwaccel_device /dev/dri/card0 -v verbose -i ~/videos/19sintel_mpeg2.mpg -f nv12 /dev/null would become "Requested output format 'nv12' is not a suitable output format" [14:32]
.......... (idle for 47mn)
***hverkuil has quit IRC (Quit: ZNC 1.7.1+deb2+b3 - https://znc.in) [15:19]
......... (idle for 44mn)
mchehabhverkuil: sent another patch series which includes the first vim2m patch... feel free to review and test
it should make vim2m useable
[16:03]
hverkuilI'll try to test this tomorrow. [16:03]
mchehabok
(it doesn't solve a serialization issue inside v4l2-mem2mem - not sure if it would be worth touching it - probably not)
ah, if you use the same format as input/output, you need either to use a gstreamer devel version or apply a patch to it in order to allow disabling an internal passthrough mode
as gst just ignores (by default) the data at the output buffer if it has the same format as the capture buffer
so, either you patch gstreamer or use a pipeline with conversion, like:
$ gst-launch-1.0 videotestsrc ! video/x-raw,format=YUY2 ! v4l2video0convert extra-controls="s,horizontal_flip=1,vertical_flip=1" ! video/x-raw,format=RGB16 ! videoconvert ! ximagesink
(that forces capture buffer to be YUYV and output buffer to be RGB565 LE)
that's the patch needed if formats are equal:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/commit/fe5236be8771ea82c850ebebe19cf1064d112bf0

plus, v4l2video0convert needs this parameter: disable-passthrough=1
[16:03]
hverkuilI'll be using v4l2-ctl & qvidcap. [16:11]
mchehab(this tip was given to me by ndufresne) [16:11]
hverkuiland test with v4l2-compliance. [16:12]
mchehabit passes v4l2-compliance
(except for the lack of request API)
it is now saying it fails by not implementing it
fail: v4l2-test-buffers.cpp(1603): doioctl_fd(media_fd, MEDIA_IOC_REQUEST_ALLOC, &req_fd)
(imo, a problem at the tool, as this shouldn't be mandatory)
[16:12]
hverkuilIs the request API disabled in your .config? [16:13]
mchehabprobably
$ grep REQ .config
# CONFIG_MEDIA_CONTROLLER_REQUEST_API is not set
yes
[16:13]
hverkuilI'll test that tomorrow as well. [16:14]
ndufresnemchehab, you just made me realize, I should probably auto-disable passthrough if there is extra-controls [16:15]
hverkuilactually, I'll test that now since this should work. [16:15]
mchehabndufresne: yes, I think so [16:15]
ndufresnendufresne filing an issue
I simply didn't think about this use case before, but many csc and scaler have flipping support
[16:15]
mchehabndufresne: to be frank, I don't see much sense in enabling passthrough myself
I mean, if there's a m2m device at a pipeline, it should be doing something :-p
[16:16]
ndufresnewhen you strictly use it as a converter / scaler, it make sense [16:17]
mchehabeither cropping/scaling/format conversion/image change due to some control/... [16:17]
ndufresneas it saves on memory bandwidth [16:17]
mchehabyes, but if it is a hardware converter, it is probably more efficient than a gst software implementation
it makes sense to be able to enable a passthrough mode
[16:19]
ndufresnesure, but let's say you have a "generic" app that has no idea what a capture device supports
to make sure it works, you would program your pipeline with
v4l2src ! v4l2convert ! kmssink
If you can zero-copy directly from v4l2src to kmssink, the passthrough will save you bandwidth
[16:19]
mchehabI see [16:20]
ndufresnethat's why an explicit control like disable-passthrough was required [16:21]
mchehaban explicit way to change it makes sense [16:21]
ndufresnemaybe it should have been the opposite (enable-passthrough), but I can't change it anymore, it's released now [16:21]
mchehab(i would just do the reverse, e.g. use something like enable-passtrough)
yeah, changing this is problematic
I see your point
[16:22]
ndufresneit's always the first user use case that wins with these things [16:22]
mchehabwhat worries me is that, if you hadn't told me about that, I would have assumed that vim2m were ok
if fmt_in == fmt_out
as I had this misconception, others might have the same
[16:23]
ndufresneyes, I got tricked plenty of times too
gstreamer is application/use-case centric, it is of course good for a lot of testing (because it allows using combinations that apps may never use), but it's not dedicated to testing the kernel
[16:25]
mchehabyes, I know [16:26]
ndufresnebut for m2m devices, like the Exynos FIMC and GScaler, it's been a great tool to fix all the stride/width/height corner case [16:27]
mchehabI usually prefer using qv4l2 for testing, but it doesn't work properly for m2m [16:27]
ndufresnendufresne also uses qv4l2 all the time [16:27]
***benjiG has left [16:27]
mchehabhverkuil: btw, how do you test m2m with v4l2-ctl and qvidcap? [16:28]
hverkuilyou can use v4l2-ctl and stream the captured frames to qvidcap with --stream-to-host [16:28]
ndufresneThe shaders are a little buggy though for I420 and NV12 [16:28]
hverkuiland run qvidcap with the -p option [16:28]
mchehabwill it use the same file handler on both apps? [16:28]
hverkuilv4l2-ctl uses the same filehandle, yes (qvidcap just receives the video frames over a socket, it doesn't use the video device) [16:29]
mchehabah [16:29]
ndufresnehverkuil, the --help would benefit from having some example commands I believe, it's not exactly obvious on first read [16:29]
mchehab--help-all of v4l2-ctl is, IMHO, very hard to read [16:30]
hverkuilI never use --help-all [16:30]
ndufresnethat one needs a pager, I always | less -RS and do searches [16:30]
mchehabI never find anything without --help-all [16:30]
hverkuilI use --help, then select the --help-foo that I actually need.
--help-vidcap or --help-streaming
for example
[16:30]
mchehabin this specific case, it should be --help-m2m :-p [16:31]
hverkuilit could be a shorthand for --help-vidcap and --help-vidout [16:31]
ndufresnethis one seems like an easy enhancement
hverkuil, so are you able to test codec drivers with qvidcap? I didn't know about this tool to be honest
[16:32]
hverkuilyes, I use qvidcap for that. [16:33]
ndufresneif you haven't yet, sounds like something to blog about
ndufresne hopes ezequielg knows
[16:33]
mchehabit would be interesting if qvidcap can test RGB565BE
I was unable to test this one with gst
[16:34]
hverkuilI.e. you can use v4l2-ctl to decode a bitstream and stream the raw video to qvidcap over a socket. [16:34]
ndufresneso that's limited to mmap io mode then ? [16:35]
hverkuilqvidcap is basically just the opengl viewer part of qv4l2, plus a socket interface. [16:35]
mchehabhverkuil: how can I use v4l2-ctl to set capture format to YUY2 and output format to RGBR? [16:36]
hverkuilYes, no zero copy at all. [16:36]
ndufresneok, but we could add DMABuf passing over a socket (both ways), could be a nice project [16:36]
mchehabah, -x [16:36]
ndufresneand then a much smaller code base with which to reproduce issues found in bigger software like chrome [16:37]
hverkuilv4l2-ctl -v pixelformat=capturefourcc -x pixelformat=outputfourcc --stream-mmap --stream-out-mmap --stream-to-host localhost
and in a separate shell: qvidcap -p
(hope I got this right, it's from memory)
mchehab: pushed a v4l2-compliance fix for when the request API is disabled in the kernel config.
[16:37]
mchehabok, thanks
I suspect you need something like this too:
--stream-from foo.raw
(and use something like vivid to generate a foo.raw)
something like (for vivid as /dev/video1)
$ v4l2-ctl -d /dev/video1 --stream-count 100 --stream-mmap --stream-to foo.raw -v width=640,height=640,pixelformat=YUYV
didn't work:
$ v4l2-ctl -v width=640,height=480,pixelformat=YUYV -x pixelformat=RGBR --stream-from foo.raw --stream-mmap --stream-out-mmap --stream-to-host localhost
--stream-to-host or --stream-from-host not supported for m2m devices
[16:40]
hverkuil: it didn't enable request API because it depends on STAGING_MEDIA
(I'm building it using media-build)
recompiling with staging and request API enabled
(with it disabled, v4l2-compliance passed)
after your patch
bbiab
back
fail: v4l2-test-buffers.cpp(1755): buf.qbuf(node)
test Requests: FAIL
(with request api enabled)
Total for vim2m device /dev/video0: 45, Succeeded: 44, Failed: 1, Warnings: 0
[16:52]
........ (idle for 37mn)
hverkuilThe remaining vim2m fail should go away once this PR is merged: https://patchwork.linuxtv.org/patch/54201/
(should have mentioned that I'm testing with this PR)
[17:35]
mchehab: what I wrote above is probably not correct since you are testing without the Request API enabled.
No, I'm correct. I saw that for that test you enabled the Request API, and without the PR it will indeed fail.
[17:45]
mchehabyes, it is now enabled with request API
I was not able to test with v4l2-ctl, though
(14:45:36) mchehab: --stream-to-host or --stream-from-host not supported for m2m devices
(14:45:36) mchehab: $ v4l2-ctl -v width=640,height=480,pixelformat=YUYV -x pixelformat=RGBR --stream-from foo.raw --stream-mmap --stream-out-mmap --stream-to-host localhost
(I suspect that your PR won't affect this)
with your PR it passes
at v4l2-compliance
test Requests: OK
Total for vim2m device /dev/video0: 45, Succeeded: 45, Failed: 0, Warnings: 0
[17:50]
hverkuilI suspect a recent change broke v4l2-ctl for m2m devices. Will check tomorrow. [17:56]
ezequielgndufresne: hverkuil: i use gstreamer to test [17:59]
ayakaok I finally made an mpv build for the v4l2 request test
but it looks like the v4l2 device is not called
[18:00]
hverkuilv4l2-ctl --stream-mmap --stream-out-mmap now works again (pushed the fix)
But I was mistaken about --stream-to-host: it appears I didn't add support for that for m2m devices. Not sure why not, I'll see if I can add that tomorrow.
[18:01]
mchehabyou can test with:
$ gst-launch-1.0 filesrc location=some_file.mp4 ! decodebin ! videoconvert ! video/x-raw,format=RGB ! v4l2video0convert disable-passthrough=1 extra-controls="s,horizontal_flip=1,vertical_flip=1" ! video/x-raw,format=RGB16 ! videoconvert ! ximagesink
or
$ gst-launch-1.0 videotestsrc ! video/x-raw,format=BGR ! v4l2video0convert disable-passthrough=1 extra-controls="s,horizontal_flip=1,vertical_flip=1" ! video/x-raw,format=YUY2 ! videoconvert ! ximagesink
(you may remove the disable-passthrough - when formats are different, gst does the right thing)
hverkuil: if you have time, it would be good if qv4l2 could also work with m2m
right now, it seems that it handles it like an output device or something
(not a priority, but it would be good if it could, at least, warn that m2m is not supported)
hverkuil: btw, I will likely apply my vim2m patches and your PR tomorrow, if nobody complains
I'm tending to add a Cc: stable as, in its current state, IMHO vim2m is broken
(still it is a big patch... not 100% sure about that)
[18:06]
....... (idle for 31mn)
ezequielggood to see the disable-passthrough was a good idea! [18:41]
.......... (idle for 47mn)
ayakaezequielg, you are using ffmpeg to verify the rockchip driver, right?
but I found it doesn't support multiple planes
so you are only using capture/output, not capture_mplane/output_mplane?
[19:28]
...... (idle for 26mn)
ezequielgayaka: use https://github.com/Kwiboo/FFmpeg/tree/v4l2-request-hwaccel maybe [19:55]
ayakaezequielg, ok the same one I used
that version doesn't support multiplanes
I just asked kwiboo
[19:55]
ezequielgit doesn't?
it should support MPLANE because my driver is MPLANE and it's working :-)
[19:57]
hverkuilmchehab: added support for --stream-to-host for m2m devices in v4l2-ctl
example: v4l2-ctl --stream-mmap --stream-out-mmap --stream-to-host localhost --stream-out-hor-speed 1
(note: without a --stream-from option v4l2-ctl will use the test pattern generator to generate an image)
[20:05]
ayakaezequielg, strange, it would set an NV12, not NV12M, format, which is hard coded
ezequielg, could you have a try with "ffmpeg -hwaccel drm -hwaccel_device /dev/dri/card0 -v trace -i videos/19sintel_mpeg2.mpg -an -f null -"
[20:15]
ezequielgayaka: i am currently busy with other setup
but i will post the patch and hopefully some instructions to build mpv and ffmpeg
[20:19]
ayakaezequielg, it is ok, I would switch to the v4l2 request test from bootlin
I want to get rid of the userspace part and finish my verification as soon as possible
[20:21]
........... (idle for 51mn)
huangdHello, are there any plans to bring higher bit-per-channel formats into videodev2.h? We're planning to include 10-bit support in our decoder, and wanted to see if there are plans, or what the reception would be to patches along those lines. Otherwise, is it more desirable for us to create new reserved image formats instead of public ones? [21:13]
Kwibooezequielg: the ffmpeg hwaccel supports basic mplane + NV12, just not NV12M that ayaka is using, I will work on adding mplane pixelformat support in hwaccel (hard to verify without a driver using mplane pixelformats) [21:15]
ezequielgah. [21:17]
