Who | What | When | |
---|---|---|---|
*** | bingbu has quit IRC (Ping timeout: 244 seconds) | [03:42] | |
APic has quit IRC (Ping timeout: 250 seconds) | [03:50] | ||
......................................................... (idle for 4h42mn) | |||
_abbenormal has quit IRC (Read error: Connection reset by peer) | [08:32] | ||
.............. (idle for 1h6mn) | |||
gnurou_ | mripard: sorry, from your email I cannot infer whether you think reconstructing the bitstream for hardware that requires it is a good or a bad idea :) | [09:38] | |
mripard | I think it's a bad idea
and we don't seem to have the same definition of "hardware that requires it" either :)
if we have the bitstream that has already been parsed by userspace, and if we can operate with what has been parsed, why would we allocate a new buffer, move the slice data around and fill the rest of the buffer with the data you parsed in the very first step? | [09:43] | |
tfiga | mripard: for consistency? | [09:53] | |
gnurou_ | how do you suggest we do it? keeping in mind that we want to keep things simple | [09:53] | |
tfiga | so you can have the same userspace work with different hardware | [09:53] | |
gnurou_ | we could send the raw data structures to the kernel, but then you'd be dealing with variable-length data from user-space with fields that are themselves variable-length
which the kernel would have to parse for hardware that does not take the raw data structures | [09:55] | |
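(A concrete way to see gnurou_'s point, using a made-up struct rather than the actual uAPI: a parsed control has a fixed layout the kernel can consume blindly, while the raw codec data structures are variable-length coded and would need a bit-level parse in the kernel.)

```c
#include <linux/types.h>

/*
 * Hypothetical illustration, not the actual uAPI: a fixed-layout control
 * carrying an already-parsed PPS. Every field has a known size and offset,
 * so the kernel can consume it without touching the bitstream.
 */
struct hypothetical_h264_pps_ctrl {
	__u8  pic_parameter_set_id;
	__u8  seq_parameter_set_id;
	__u8  weighted_bipred_idc;
	__s8  pic_init_qp_minus26;
	__s8  chroma_qp_index_offset;
	__u16 flags;	/* entropy_coding_mode_flag and friends */
};

/*
 * The raw PPS NAL unit, by contrast, is Exp-Golomb coded: each syntax
 * element is variable-length, so even locating one field requires a full
 * bit-level parse -- exactly the work to be kept out of the kernel.
 */
```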
...... (idle for 26mn) | |||
mripard | that looks like the opposite of keeping things simple
(reconstructing the bitstream)
why can't we just have different formats that would have different data? and the list of controls that are mandatory would change from one format to the other | [10:21] | |
gnurou_ | It complicates user space considerably | [10:23] | |
mripard | because reconstructing the bitstream wouldn't? | [10:24] | |
gnurou_ | One case is easier to manage than many | [10:24] | |
mripard | that's not really working either
if that's truly what we believe, we would have stuck with the stateful API then | [10:25] | |
gnurou_ | It's really a matter of which way is the more painful | [10:28] | |
mripard | and if we want to support all the features that the rockchip IP has apparently | [10:29] | |
gnurou_ | Manage different properties per-hardware or reconstruct part of the bitstream for some | [10:29] | |
mripard | then we'll end up sending the whole bitstream to the kernel | [10:29] | |
gnurou_ | Mmm I need to educate myself more about this ip | [10:31] | |
mripard | what hardware were we discussing then? | [10:32] | |
gnurou_ | All of them? :) I'm not familiar with the rockchip ip in any case
What I like about the current patch is that it keeps things in a structured way that both user space and kernel can interpret easily | [10:33] | |
mripard | the rockchip IP ayaka was discussing is a pretty good example then, because it pushes that question to the limit | [10:37] | |
gnurou_ | Yep | [10:37] | |
mripard | because it can operate on binary DPB, scaling list, PPS header and Cabac table IIRC
throw the slice data into the mix, and you really end up better off just sending the whole bitstream | [10:37] | |
gnurou_ | But it needs all these elements to be presented separately and doesn't do any kind of buffer management, right? | [10:38] | |
mripard | I don't know, I guess it doesn't do the buffer management
but for the latter I have no idea. still, do we want, since the rockchip driver can operate that way, to force that down the throat of all drivers? | [10:39] | |
gnurou_ | well I would need to know more about the way the rockchip ip operates to answer that question
If we can make it use the stateful api, then problem solved | [10:41] | |
mripard | even if it's purely theoretical, I mean, that's the direction you were arguing for
if that particular IP can behave the way we want it to, maybe the next one won't. where do we draw the line, and what set do we want to reconstruct exactly? | [10:41] | |
gnurou_ | Yes, that's definitely something we want to consider | [10:42] | |
mripard | if we don't want to reconstruct the bitstream, then we can support odd cases as they happen | [10:43] | |
gnurou_ | What I'm afraid of is that we'll end up with a soup of controls of various granularity, which would make it difficult to keep userspace compatible with all the cases | [10:43] | |
mripard | would that be so complicated? most of this data can be provided through additional controls (for things like the raw reference lists), or through having a format with more data
it's just a matter of which format and controls are supported by the driver then
just like any camera application that has no idea ahead of time what formats, controls and ISP the sensor is going to have, and has to discover them at runtime | [10:45] | |
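(The runtime discovery mripard refers to is standard V4L2: userspace can walk every control a driver exposes with the NEXT_CTRL flag and adapt to whatever set this particular decoder requires. A minimal sketch, error handling omitted:)

```c
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Enumerate all controls exposed by the driver behind fd. */
static void enumerate_controls(int fd)
{
	struct v4l2_queryctrl qc;

	memset(&qc, 0, sizeof(qc));
	qc.id = V4L2_CTRL_FLAG_NEXT_CTRL;
	while (ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0) {
		/* qc.id and qc.name identify one supported control */
		qc.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
	}
}
```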
gnurou_ | I have hoped that codecs would be easier to handle than cameras ;) | [10:48] | |
mripard | well, apparently, they aren't :)
and that's also why we're merging these controls as an API that isn't public yet, so that we can change it if we want to support more hardware and it doesn't work for them
so why not just merge the current set of APIs and figure out how to support those odd cases when we actually encounter them, with kernel and userspace code and some understanding of the hardware to be supported? | [10:50] | |
ayaka | mripard, not actually, there are three decoders that rockchip would use | [10:53] | |
gnurou_ | By merging, you mean in staging right? | [10:55] | |
mripard | the driver in staging, and the UAPI is in linux/media, so not actually a uapi | [10:55] | |
gnurou_ | Yeah, this will obviously take some time to clear, so at least we can try to get what we have in | [11:02] | |
ezequielg | mripard: gnurou_: what is the current status of the h264 controls?
and especially the format | [11:10] | |
gnurou_ | ezequielg: Maxime's patch is the latest proposal on the topic
sorry, afk for a short while | [11:12] | |
ezequielg | mripard: oh, btw, mpv/ffmpeg has a pretty neat v4l2-request implementation.
have you seen that? working pretty well, with gbm. | [11:13] | |
...... (idle for 25mn) | |||
hverkuil | mchehab: I noticed the same thing with vim2m last week. Thank you for working on it! | [11:38] | |
mchehab | anytime
it has another problem which I'm working on right now: it produces timeouts if multiple file handles are used (because it uses a work queue per device instead of per fh). I suspect it should be trivial to fix
bbiab... need to reboot (using the same machine for devel and desktop is painful) | [11:38] | |
gnurou_ | ayaka: where can we find details about the rockchip codecs you were talking about on the email thread?
mripard: we should sync at some point with all the data and try to decide a course of action for the long term. there has to be a way to manage this elegantly | [11:55] | |
...... (idle for 28mn) | |||
mripard | ezequielg: I have a new version queued that I intend to send this week
gnurou_: agreed, I guess ndufresne's feedback would be valuable as well | [12:26] | |
ezequielg | mripard: how are we tackling the start-code?
rockchip requires the nalu start-code on the slice payload. chromeos is just adding it, but that won't cut it, thinking in terms of va-api / ffmpeg working for both. | [12:32] | |
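(For context: the Annex B start code is the byte sequence 00 00 01, commonly carried with an extra leading zero byte. A minimal sketch of what "just adding it" amounts to; the function and buffer handling are illustrative, not taken from any driver:)

```c
#include <string.h>

static const unsigned char annexb_start_code[4] = { 0x00, 0x00, 0x00, 0x01 };

/* Prepend the Annex B start code to a raw slice NALU before queueing it. */
static size_t write_annexb_slice(unsigned char *dst, size_t dst_size,
				 const unsigned char *nalu, size_t nalu_size)
{
	if (dst_size < sizeof(annexb_start_code) + nalu_size)
		return 0;	/* caller must supply a large enough buffer */
	memcpy(dst, annexb_start_code, sizeof(annexb_start_code));
	memcpy(dst + sizeof(annexb_start_code), nalu, nalu_size);
	return sizeof(annexb_start_code) + nalu_size;
}
```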
mripard | then rockchip/chromeos will deal with this when they'll get to it? | [12:35] | |
ezequielg | what do you mean?
i mean, now is as good a time as any to start thinking about how we are gonna support codecs. | [12:36] | |
mripard | I don't have the hardware, I don't have any understanding of the hardware, I don't have any incentive to reconstruct the bitstream and / or the NALU start code in userspace, and the API can be changed at will | [12:38] | |
ezequielg | just as was done when we discussed the JPEG support, we tried to do that to solve all cases, not just one. | [12:38] | |
mripard | so if rockchip, chromeos or anyone want to work on this, then feel free to do so and provide suggestions | [12:39] | |
ezequielg | I will. | [12:40] | |
mripard | we've discussed this earlier today already | [12:40] | |
ezequielg | I was asking politely if you had anything on your mind. | [12:40] | |
mripard | apparently my solution isn't practical | [12:41] | |
ezequielg | what? another fourcc? | [12:41] | |
mripard | yes | [12:41] | |
ezequielg | yes, that's the most direct and naive.
mchehab has rejected the headerless JPEG fourcc, and kind of convinced me of how nice it is for userspace to avoid dealing with more fourccs. in this case, maybe it's not so bad? the difference is "add nalu start code" vs. "don't"
i mean, specifically in the h264 case. | [12:41] | |
hverkuil | I have to add that parsing the JPEG header in kernelspace is really easy. Anything more complicated is probably not suitable to do in the kernel. | [12:45] | |
ezequielg | ezequielg nods
I had a discussion with ndufresne about this. In the JPEG case, the parsing didn't really introduce any serious concerns. | [12:45] | |
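(hverkuil's point holds because a JPEG header is a flat sequence of length-prefixed marker segments. An illustrative walk, not lifted from any driver, that locates the start-of-scan marker:)

```c
#include <stddef.h>
#include <stdint.h>

/* Return the offset of the SOS (start of scan) marker, or -1 on error. */
static ptrdiff_t find_jpeg_sos(const uint8_t *buf, size_t len)
{
	size_t pos = 2;	/* skip SOI (0xff 0xd8) */

	if (len < 2 || buf[0] != 0xff || buf[1] != 0xd8)
		return -1;

	while (pos + 4 <= len && buf[pos] == 0xff) {
		if (buf[pos + 1] == 0xda)	/* SOS: scan data follows */
			return pos;
		/* every header segment carries a 16-bit big-endian length
		 * that includes the two length bytes themselves */
		pos += 2 + ((buf[pos + 2] << 8) | buf[pos + 3]);
	}
	return -1;
}
```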
ayaka | gnurou_, believe me I don't know either | [12:47] | |
ezequielg | ayaka: hi!
i will post some mpeg-2 decoding patches this week (i hope). i have them here more or less cleaned-up. on rk3399 mpv/ffmpeg/panfrost is working well. | [12:47] | |
ayaka | ezequielg, I know, but I would post another as well | [12:48] | |
gnurou_ | ayaka: uh, we have a problem then :p | [12:48] | |
ayaka | I don't like the current one, written for all request drivers | [12:48] | |
ezequielg | ayaka: i see.. | [12:49] | |
ayaka | ezequielg, which tools do you use to verify the driver | [12:49] | |
ezequielg | like i just said, mpv+ffmpeg+panfrost | [12:49] | |
ayaka | ezequielg, https://github.com/hizukiayaka/linux-kernel/tree/mpeg2_mpp_v4l2
oh, mpv, I want to skip it. is there a way to use ffmpeg only? | [12:49] | |
ezequielg | and how do you display? | [12:50] | |
ayaka | gnurou_, I mean there are too many threads
ezequielg, no need, display won't work with upstream
ezequielg, also I can track the result by register; I just need to know the v4l2 flow works
I would focus on improving the v4l2 core part but not the rockchip device, although I prefer the way I would write it | [12:50] | |
ezequielg | I really don't understand what you are trying to do :-)
but I guess we'll see the patches... fwiw, the version I will post will work with mpv + ffmpeg (rendering with kms or gbm) | [12:51] | |
ayaka | ezequielg, the current problem is about input data or input mechanism | [12:52] | |
ezequielg | and with va-api or whatever implements the request api | [12:52] | |
ayaka | once I verified the driver I wrote would work | [12:53] | |
mripard | ezequielg: we would have to reconstruct the NALU start code | [12:53] | |
ayaka | I would move forward to the v4l2 part
besides, I don't like the video device mixed with decoder and encoder; that is what I solve and post in the vendor part
ezequielg, anyway I do refer to yours and kwiboo's one as well, as I have forgotten a lot about v4l2 | [12:53] | |
tfiga | mripard: first of all, the potential rockchip decoders that are given as an example of problems are not used on Chrome OS
mripard: AFAICT, the ones we use (rk3288 and rk3399) would work fine with what's being proposed, +/- the start code. but I believe we already figured out that we want to put annex.b slice NALUs in the buffers? | [12:56] | |
ayaka | come on, the one for chrome os is slow and ugly
I really don't want to mention that it reconstructs | [12:58] | |
tfiga | ayaka: we didn't see any performance issues | [12:59] | |
mripard | tfiga: I didn't get the memo apparently, but ok | [12:59] | |
ayaka | tfiga: because you don’t know | [12:59] | |
tfiga | do you have any precise numbers to confirm what you say? | [13:00] | |
ayaka | yes, but I should not publish it | [13:00] | |
tfiga | mripard: https://patchwork.kernel.org/patch/10713675/#22439577
ayaka: aha, I have my numbers that say that it's fast and I won't publish them either... | [13:00] | |
ayaka | it reconstructs the bitstream, although I saw some driver-like code or stl do the same thing | [13:00] | |
tfiga | come on, we're expected to have a technical discussion here
so we expect facts | [13:01] | |
ayaka | tfiga: because of the facts I know, I can't tell you the results | [13:01] | |
tfiga | we can reach the decoding speed as advertised by the hardware
so what performance problem is there? | [13:01] | |
ayaka | not really
I know the results of the rockchip proprietary driver but I can't say them here. but I would point out some obvious problems
I have a bunch of videos that chrome os won't be able to play
now the problem is coming: reconstruction is slow and misleading | [13:01] | |
ezequielg | tfiga: thanks for the link. | [13:04] | |
ayaka | problem two, updating the cabac table costs a lot of time | [13:04] | |
tfiga | ayaka: how much does it take, a few microseconds? | [13:04] | |
paulk-leonov | tfiga, I'm not sure the decision to include the full annex-b slice NALU really still stands after ayaka brought up that other elements might also need to be passed in binary form | [13:05] | |
ayaka | tfiga it depends on the cpu frequency
I know chrome os doesn't care about it a lot | [13:05] | |
tfiga | please provide some numbers then
that's how we work here and then we can think about optimizing | [13:05] | |
ayaka | I don't think any bit operations would be fast on arm | [13:06] | |
tfiga | I'm not saying there is no problem - I'm saying we don't have any evidence that there is
if you give us evidence that there is a problem then we can fix it | [13:06] | |
ayaka | I should a | [13:07] | |
paulk-leonov | tfiga, the general issue is that some decoders will take "parts of raw bitstream" while others will need it all parsed. I think we can all agree that parsing in the kernel is a no-go. So we can either have a way to include parsed or unparsed data depending on the need, or decide to go with parsed always and do some reconstruction in the kernel -- either way, the decision that applies should also apply to the slice header | [13:07] | |
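(One way to picture the design space paulk-leonov describes, with entirely invented names; this sketches the problem, not anything proposed upstream: each bitstream element could, per driver, be consumed either as a parsed control or as raw bytes.)

```c
/* Hypothetical per-element capability, for illustration only. */
enum element_mode {
	ELEMENT_PARSED,	/* fixed-layout control filled in by userspace */
	ELEMENT_RAW,	/* untouched bitstream bytes in the buffer */
};

struct decoder_input_caps {
	enum element_mode sps;
	enum element_mode pps;
	enum element_mode scaling_lists;
	enum element_mode slice_header;	/* the element this debate hinges on */
};
```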
ayaka | I should say the performance can achieve much higher than documented
I think chrome os never cares about the multiple streams problem
even a few ms would cause a lot of problems here | [13:07] | |
tfiga | well, a few ms would definitely cause a problem | [13:08] | |
ayaka | believe me, when you are using an A7 cpu at 500MHz | [13:08] | |
tfiga | however it's not true that Chrome OS doesn't care about multiple streams, we have performance requirements for those | [13:08] | |
ayaka | I know, quite a bit lower than the standard | [13:09] | |
tfiga | I'm not sure what standard you refer to | [13:09] | |
ayaka | those chips sell to the China market
rk3399 is very expensive for the China market
anyway, ayaka doesn't stand for rockchip; ayaka is a developer for open source | [13:09] | |
tfiga | anyway, it doesn't matter, there are tens of different projects with different requirements | [13:10] | |
ayaka | but when it comes to upstream
there would be only one interface | [13:11] | |
tfiga | my point is, performance is something that needs to be measured and optimized based on numbers
if we don't have numbers, we can't optimize
please provide us some numbers confirming your theory and then we can work on it | [13:11] | |
ayaka | The big problem is not the numbers
it is an inflexible interface
have you ever heard of the other chip design companies? do you have an idea of their decoders and encoders, which are stateless as well? | [13:13] | |
gnurou_ | do you have a proposal for something more flexible? | [13:14] | |
tfiga | yes, I do | [13:14] | |
ayaka | gnurou_: I am writing a patch called memory region
it is used to describe a memory buffer with different regions, like metadata | [13:14] | |
tfiga | I think we could just include full NALUs and have offsets point to appropriate parts of the bitstream | [13:15] | |
ayaka | tfiga: it is not a good idea either
tfiga: you are a exporter at vp9 right? | [13:15] | |
tfiga | exporter?
well, VP9 doesn't have NALU obviously | [13:16] | |
ayaka | exporter
I mean good at | [13:16] | |
tfiga | well, VP9 is much simpler than H264 | [13:16] | |
ayaka | expert
tfiga: no, there is a motion table which is updated between pictures. you need to read it or the parsing of the next picture can't be started. I don't remember the performance or workflow I met before. I remember there is an N frame which is used for reference but not for display | [13:16] | |
tfiga | ayaka: you mean motion vectors? | [13:19] | |
ayaka | yes it is
I have not read vp9 for a year; I forget most of the problems I met before
I am still struggling with the userspace
tfiga: will you come to FOSDEM btw? | [13:19] | |
tfiga | ayaka: I'm not so sure about vp9 to be honest
but we're looking into h264 first | [13:21] | |
ayaka | tfiga it is easy; once I have verified my mpeg2 dec, I can move to h264 | [13:22] | |
tfiga | we can have controls that include all the parsed information, but also the full NALU bitstream with offsets
and NAL unit type
and I suppose that would work for any type of hardware | [13:22] | |
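(Roughly what tfiga's suggestion could look like as a structure; every name below is invented for illustration:)

```c
#include <linux/types.h>

/* Hypothetical per-slice control: parsed fields live in other controls,
 * plus a reference into the full NALU carried verbatim in the buffer. */
struct hypothetical_slice_ref {
	__u32 nal_unit_type;	/* e.g. 1 = non-IDR slice, 5 = IDR slice */
	__u32 bitstream_offset;	/* byte offset of the NALU in the buffer */
	__u32 size;		/* total NALU size in bytes */
};
```

A driver that wants parsed data would read the controls and ignore the offsets; one that wants raw bytes would do the opposite.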
ayaka | tfiga wait a second, I didn't know you knew the other vendors' decoders
tfiga I want to talk more about that | [13:23] | |
tfiga | I'm sorry, I can't tell you about other vendors
at least not yet | [13:23] | |
ayaka | it is ok, I know all of them
I know most of them in the China market, also those Taiwanese ones
stateless decoder is an odd name for this kind of device
anyway, most of them are just acceleration | [13:23] | |
tfiga | stateless means that the hardware doesn't store any state internally | [13:26] | |
ayaka | so there is no way to define all the input formats for them, only the most common ones are possible
yes and no | [13:26] | |
tfiga | it gets everything from the driver every time a decoding job is scheduled | [13:27] | |
ayaka | some decoders and encoders have a link list mode | [13:27] | |
tfiga | what is a link list mode? | [13:27] | |
ayaka | they do track the previous result of the previous picture
but they don't care about the session
tfiga the current driver is one shot, one picture; link list mode is one shot, many pictures | [13:27] | |
tfiga | isn't it just a list of scheduled decoding jobs?
but the list is still constructed by the driver, isn't it? | [13:29] | |
ayaka | you can configure the registers for a series of pictures
but it would require less than the full mode; some of them would be filled by the decoder or encoder itself based on the previous result
tfiga nope, more like you push a bunch of register sets into the decoder, which can be in the same sequence as decoding or not; it depends on the device capability | [13:29] | |
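(An illustration of the link list mode ayaka describes, with invented names and layout: instead of programming one register image and taking one interrupt per picture, the driver chains several pre-filled sets and the hardware walks the chain, deriving some fields from the previous picture's result.)

```c
#include <stdint.h>

/* Hypothetical descriptor; real hardware defines its own layout. */
struct hypothetical_reg_set {
	uint32_t regs[64];		/* one picture's register image */
	uint32_t next_desc_dma_addr;	/* DMA address of the next set; 0 ends the chain */
};

/* one-shot mode:  program one set, take one interrupt per picture
 * link list mode: queue a chain of N sets, take one interrupt at the end */
```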
tfiga | but it's the driver which pushes the registers, right?
there is no firmware or hardware logic that manages that
and anyway, it's just a scheduling thing; the statelessness applies to decoding data | [13:33] | |
ayaka | tfiga, of course, even those drivers with a firmware do the same thing | [13:34] | |
tfiga | sure, there is always some state - the hardware has registers
the registers are not volatile | [13:34] | |
ayaka | tfiga, it's just that there are other internal registers for those devices with a firmware | [13:34] | |
tfiga | but our meaning of stateless is that the driver manages the registers, not hardware/firmware,
or to be precise, the driver fully controls all the state | [13:35] | |
ayaka | I would like to say it is the driver managing the session | [13:35] | |
tfiga | yes, I think "session" would also make sense | [13:36] | |
ayaka | tfiga, so I think the current driver is not stateless enough, as it would have some buffers for the cabac table and such | [13:36] | |
tfiga | but the driver is not supposed to be stateless
the driver is supposed to manage the state
so from the userspace point of view, the driver is stateful | [13:37] | |
ayaka | tfiga, I don't agree with you
tfiga, having a look at those vendor drivers, the driver is just a common interface to the hardware
I don't want there to be state in the driver, but I can accept the fact of the current v4l2 | [13:37] | |
tfiga | well, we don't want what those vendor drivers do | [13:38] | |
ayaka | tfiga, so for example the current driver doesn't allow width or height changes | [13:39] | |
tfiga | V4L2 is supposed to expose an abstract functional interface | [13:39] | |
ayaka | tfiga, have you thought of vp9, for which that is common | [13:39] | |
tfiga | to achieve some operation | [13:39] | |
ayaka | and SVC for H.264 | [13:39] | |
tfiga | in this case it's an interface for decoding video
thanks to it, we can use different hardware platforms with different userspaces | [13:39] | |
ayaka | tfiga, that is why I wrote https://github.com/hizukiayaka/linux-kernel/blob/mpeg2_mpp_v4l2/drivers/staging/rockchip-mpp/mpp_dev_vdpu2.c#L200
tfiga, there are more problems I can't say here, as it is a loyalty problem | [13:40] | |
tfiga | well, the lack of ability to change the width and height is just a missing feature
which we can add if needed | [13:41] | |
ayaka | but trust me, I am not boasting; I need some time to make them work
I was tired of the clock tree problems upstream; they are not solved on rk3399 yet. that is why I didn't contribute for a year | [13:43] | |
tfiga | anyway, I'm not sure what we're trying to discuss here | [13:44] | |
ayaka | and that is why I would choose the rk3328
I think we have talked about many problems here | [13:44] | |
tfiga | so I gave an example solution for H.264, but didn't get a reason why it wouldn't work | [13:44] | |
ayaka | the current problems of the driver in chrome os
are why I know the v4l2 interface is not flexible | [13:45] | |
tfiga | looking for proposals :)
we proposed something that works for anything that we can think of
I tried to propose something that would solve your problem, but you say it wouldn't work
so please propose something that would
for the time being we have the api in staging; we can wait with moving it out of there until we see your proposal | [13:46] | |
ayaka | sorry for the bad network connection
tfiga, I am trying to push some patches before I come to FOSDEM, and I will stay in the EU for half a month | [13:52] | |
tfiga | ayaka: okay, that would be great | [13:53] | |
ayaka | and I won't bring my computer so please wait a little longer time | [13:53] | |
tfiga | in any case, that's exactly the reason we're going with the staging tree | [13:53] | |
hverkuil | tfiga: reviewed v3 of the stateful codec spec. Thank you for all your work on this! | [13:54] | |
ayaka | it is a pity that you won't come; we could have talked more about this there | [13:54] | |
tfiga | ayaka: indeed, sorry | [13:55] | |
ayaka | I really should have looked at this topic more closely before it got to where it is | [13:55] | |
hverkuil | ayaka: tfiga: the cedrus driver (and corresponding MPEG2 API) will remain in staging for a while: we need at least one other stateless decoder driver and ideally one stateless encoder before we will move it out of staging. | [13:55] | |
ayaka | my summary doesn't mention those problems of vp9 or the input data of the other vendors | [13:56] | |
tfiga | hverkuil: thanks, looks like a relatively small number of comments. I'll wait a few more days and try to respin | [13:56] | |
ayaka | hverkuil, oh I forgot, there is a problem with MPEG-1 and D frames
which the v4l2 header does not cover
maybe some problems with field pictures too | [13:56] | |
hverkuil | tfiga: it's in good shape. Looking forward to including it in the spec. | [13:57] | |
ayaka | I wish it would stay in staging for a little longer | [13:58] | |
hverkuil | ayaka: post this to the mailinglist. It's something for paulk-leonov to look at (not my area of expertise). | [13:58] | |
ayaka | but I think nobody would use mpeg-1 these days
it is ok; as for paulk-leonov, I will meet him in a few days | [13:58] | |
hverkuil | ah, he's at fosdem as well?
still, just post such things to the mailinglist. Then others can comment on it as well. | [13:59] | |
ayaka | yes, I knew him years before | [14:00] | |
paulk-leonov | hverkuil, yes I'll be around | [14:00] | |
...... (idle for 26mn) | |||
ayaka | anyone have an idea on how ffmpeg is supposed to work with v4l2 request?
ffmpeg -i ~/videos/19sintel_mpeg2.mpg -hwaccel drm -hwaccel_device /dev/dri/card0 -v verbose /dev/null always tells me "Option hwaccel (use HW accelerated decoding) cannot be applied to output url /dev/null -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to." | [14:26] | |
ffmpeg -hwaccel drm -hwaccel_device /dev/dri/card0 -v verbose -i ~/videos/19sintel_mpeg2.mpg -f nv12 /dev/null becomes "Requested output format 'nv12' is not a suitable output format" | [14:32] | ||
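(Both failures appear to be ffmpeg usage rather than driver issues: input options such as -hwaccel must precede the -i they apply to, which the second attempt fixes; and nv12 is a pixel format, not a muxer, so -f nv12 names no output format. Presumably something along the lines of -pix_fmt nv12 -f rawvideo /dev/null, or the -an -f null - form used later in this log, is what was intended.)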
.......... (idle for 47mn) | |||
*** | hverkuil has quit IRC (Quit: ZNC 1.7.1+deb2+b3 - https://znc.in) | [15:19] | |
......... (idle for 44mn) | |||
mchehab | hverkuil: sent another patch series which includes the first vim2m patch... feel free to review and test
it should make vim2m usable | [16:03] | |
hverkuil | I'll try to test this tomorrow. | [16:03] | |
mchehab | ok
(it doesn't solve a serialization issue inside v4l2-mem2mem - not sure if it would be worth touching it - probably not)
ah, if you use the same format as input/output, you need either to use a gstreamer devel build or apply a patch to it in order to allow disabling an internal passthrough mode, as gst just ignores (by default) the data at the output buffer if it has the same format as the capture buffer
so, either you patch gstreamer or use a pipeline with conversion, like:
$ gst-launch-1.0 videotestsrc ! video/x-raw,format=YUY2 ! v4l2video0convert extra-controls="s,horizontal_flip=1,vertical_flip=1" ! video/x-raw,format=RGB16 ! videoconvert ! ximagesink
(that forces the capture buffer to be YUYV and the output buffer to be RGB565 LE)
that's the patch needed if formats are equal:
plus, v4l2video0convert needs this parameter: disable-passthrough=1 | [16:03] | |
hverkuil | I'll be using v4l2-ctl & qvidcap. | [16:11] | |
mchehab | (this tip was given to me by ndufresne) | [16:11] | |
hverkuil | and test with v4l2-compliance. | [16:12] | |
mchehab | it passes v4l2-compliance
(except for the lack of request API)
it is now saying it fails by not implementing it:
fail: v4l2-test-buffers.cpp(1603): doioctl_fd(media_fd, MEDIA_IOC_REQUEST_ALLOC, &req_fd)
(imo, a problem in the tool, as this shouldn't be mandatory) | [16:12] | |
hverkuil | Is the request API disabled in your .config? | [16:13] | |
mchehab | probably
$ grep REQ .config
# CONFIG_MEDIA_CONTROLLER_REQUEST_API is not set
yes | [16:13] | |
hverkuil | I'll test that tomorrow as well. | [16:14] | |
ndufresne | mchehab, you just made me realize, I should probably auto-disable passthrough if there is extra-controls | [16:15] | |
hverkuil | actually, I'll test that now since this should work. | [16:15] | |
mchehab | ndufresne: yes, I think so | [16:15] | |
ndufresne | ndufresne filing an issue
I simply didn't think about this use case before, but many CSCs and scalers have flipping support | [16:15] | |
mchehab | ndufresne: to be frank, I don't see much sense in enabling passthrough myself
I mean, if there's an m2m device in a pipeline, it should be doing something :-p | [16:16] | |
ndufresne | when you strictly use it as a converter / scaler, it make sense | [16:17] | |
mchehab | either cropping/scaling/format conversion/image change due to some control/... | [16:17] | |
ndufresne | as it saves on memory bandwidth | [16:17] | |
mchehab | yes, but if it is a hardware converter, it is probably more efficient than a gst software implementation
it makes sense to be able to enable a passthrough mode | [16:19] | |
ndufresne | sure, but let's say you have a "generic" app that has no idea what a capture device supports
to make sure it works, you would program your pipeline with v4l2src ! v4l2convert ! kmssink
if you can zero-copy directly from v4l2src to kmssink, the passthrough will save you bandwidth | [16:19] | |
mchehab | I see | [16:20] | |
ndufresne | that's why an explicit control like disable-passthrough was required | [16:21] | |
mchehab | an explicit way to change it makes sense | [16:21] | |
ndufresne | maybe it should have been the opposite (enable-passthrough), but I can't change it anymore, it's released now | [16:21] | |
mchehab | (i would just do the reverse, e.g. use something like enable-passthrough)
yeah, changing this is problematic
I see your point | [16:22] | |
ndufresne | it's always the first user use case that wins with these things | [16:22] | |
mchehab | what worries me is that, if you hadn't told me about that, I would have assumed that vim2m was ok
if fmt_in == fmt_out
as I had this misconception, others might have the same | [16:23] | |
ndufresne | yes, I got tricked plenty of times too
gstreamer is application/use-case centric. it is of course good for a lot of testing (because it allows using combinations that apps may never use), but it's not dedicated to testing the kernel | [16:25] | |
mchehab | yes, I know | [16:26] | |
ndufresne | but for m2m devices, like the Exynos FIMC and GScaler, it's been a great tool to fix all the stride/width/height corner cases | [16:27] | |
mchehab | I usually prefer using qv4l2 for testing, but it doesn't work properly for m2m | [16:27] | |
ndufresne | ndufresne also uses qv4l2 all the time | [16:27] | |
*** | benjiG has left | [16:27] | |
mchehab | hverkuil: btw, how do you test m2m with v4l2-ctl and qvidcap? | [16:28] | |
hverkuil | you can use v4l2-ctl and stream the captured frames to qvidcap with --stream-to-host | [16:28] | |
ndufresne | The shaders are a little buggy though for I420 and NV12 | [16:28] | |
hverkuil | and run qvidcap with the -p option | [16:28] | |
mchehab | will it use the same file handler on both apps? | [16:28] | |
hverkuil | v4l2-ctl uses the same filehandle, yes (qvidcap just receives the video frames over a socket, it doesn't use the video device) | [16:29] | |
mchehab | ah | [16:29] | |
ndufresne | hverkuil, the --help would benefit from having some example commands, I believe; it's not exactly obvious on first read | [16:29] | |
mchehab | --help-all of v4l2-ctl is, IMHO, very hard to read | [16:30] | |
hverkuil | I never use --help-all | [16:30] | |
ndufresne | that one needs a pager, I always | less -RS and do searches | [16:30] | |
mchehab | I never find anything without --help-all | [16:30] | |
hverkuil | I use --help, then select the --help-foo that I actually need.
--help-vidcap or --help-streaming for example | [16:30] | |
mchehab | in this specific case, it should be --help-m2m :-p | [16:31] | |
hverkuil | it could be a shorthand for --help-vidcap and --help-vidout | [16:31] | |
ndufresne | this one seems like an easy enhancement
hverkuil, so are you able to test codec drivers with qvidcap? I didn't know about this tool, to be honest | [16:32] | |
hverkuil | yes, I use qvidcap for that. | [16:33] | |
ndufresne | if you haven't yet, sounds like something to blog about
ndufresne hopes ezequielg knows | [16:33] | |
mchehab | it would be interesting if qvidcap can test RGB565BE
I was unable to test this one with gst | [16:34] | |
hverkuil | I.e. you can use v4l2-ctl to decode a bitstream and stream the raw video to qvidcap over a socket. | [16:34] | |
ndufresne | so that's limited to mmap io mode then ? | [16:35] | |
hverkuil | qvidcap is basically just the opengl viewer part of qv4l2, plus a socket interface. | [16:35] | |
mchehab | hverkuil: how can I use v4l2-ctl to set capture format to YUY2 and output format to RGBR? | [16:36] | |
hverkuil | Yes, no zero copy at all. | [16:36] | |
ndufresne | ok, but we could add DMABuf passing over a socket (both ways), could be a nice project | [16:36] | |
mchehab | ah, -x | [16:36] | |
ndufresne | and then a much smaller code base with which to reproduce issues found in bigger software like chrome | [16:37] | |
hverkuil | v4l2-ctl -v pixelformat=capturefourcc -x pixelformat=outputfourcc --stream-mmap --stream-out-mmap --stream-to-host localhost
and in a separate shell: qvidcap -p
(hope I got this right, it's from memory)
mchehab: pushed a v4l2-compliance fix for when the request API is disabled in the kernel config. | [16:37] | |
mchehab | ok, thanks
I suspect you need something like this too: --stream-from foo.raw
(and use something like vivid to generate a foo.raw)
something like (for vivid as /dev/video1):
$ v4l2-ctl -d /dev/video1 --stream-count 100 --stream-mmap --stream-to foo.raw -v width=640,height=640,pixelformat=YUYV
didn't work:
$ v4l2-ctl -v width=640,height=480,pixelformat=YUYV -x pixelformat=RGBR --stream-from foo.raw --stream-mmap --stream-out-mmap --stream-to-host localhost
--stream-to-host or --stream-from-host not supported for m2m devices | [16:40] | |
hverkuil: it didn't enable request API because it depends on STAGING_MEDIA
(I'm building it using media-build)
recompiling with staging and request API enabled, after your patch (with it disabled, v4l2-compliance passed)
bbiab
back
fail: v4l2-test-buffers.cpp(1755): buf.qbuf(node)
test Requests: FAIL (with request api enabled)
Total for vim2m device /dev/video0: 45, Succeeded: 44, Failed: 1, Warnings: 0 | [16:52] | ||
........ (idle for 37mn) | |||
hverkuil | The remaining vim2m fail should go away once this PR is merged: https://patchwork.linuxtv.org/patch/54201/
(should have mentioned that I'm testing with this PR) | [17:35] | |
mchehab: what I wrote above is probably not correct since you are testing without the Request API enabled.
No, I'm correct. I saw that for that test you enabled the Request API, and without the PR it will indeed fail. | [17:45] | ||
mchehab | yes, it is now enabled with request API
I was not able to test with v4l2-ctl, though:
(14:45:36) mchehab: --stream-to-host or --stream-from-host not supported for m2m devices
(14:45:36) mchehab: $ v4l2-ctl -v width=640,height=480,pixelformat=YUYV -x pixelformat=RGBR --stream-from foo.raw --stream-mmap --stream-out-mmap --stream-to-host localhost
(I suspect that your PR won't affect this)
with your PR it passes v4l2-compliance:
test Requests: OK
Total for vim2m device /dev/video0: 45, Succeeded: 45, Failed: 0, Warnings: 0 | [17:50] | |
hverkuil | I suspect a recent change broke v4l2-ctl for m2m devices. Will check tomorrow. | [17:56] | |
ezequielg | ndufresne: hverkuil: i use gstreamer to test | [17:59] | |
ayaka | ok, I finally built an mpv for v4l2 request testing
but it looks like the v4l2 device is not called | [18:00] | |
hverkuil | v4l2-ctl --stream-mmap --stream-out-mmap now works again (pushed the fix)
But I was mistaken about --stream-to-host: it appears I didn't add support for that for m2m devices. Not sure why not, I'll see if I can add that tomorrow. | [18:01] | |
mchehab | you can test with:
$ gst-launch-1.0 filesrc location=some_file.mp4 ! decodebin ! videoconvert ! video/x-raw,format=RGB ! v4l2video0convert disable-passthrough=1 extra-controls="s,horizontal_flip=1,vertical_flip=1" ! video/x-raw,format=RGB16 ! videoconvert ! ximagesink
or
$ gst-launch-1.0 videotestsrc ! video/x-raw,format=BGR ! v4l2video0convert disable-passthrough=1 extra-controls="s,horizontal_flip=1,vertical_flip=1" ! video/x-raw,format=YUY2 ! videoconvert ! ximagesink
(you may remove the disable-passthrough - when formats are different, gst does the right thing)
hverkuil: if you have time, it would be good if qv4l2 could also work with m2m. right now, it seems that it handles it like an output device or something (not a priority, but it would be good if it could, at least, warn that m2m is not supported)
hverkuil: btw, I will likely apply my vim2m patches and your PR tomorrow, if nobody complains. I'm tending to add a Cc: stable as, in its current state, IMHO vim2m is broken (still, it is a big patch... not 100% sure about that) | [18:06] | |
....... (idle for 31mn) | |||
ezequielg | good to see the disable-passthrough was a good idea! | [18:41] | |
.......... (idle for 47mn) | |||
ayaka | ezequielg, you are using ffmpeg to verify the rockchip driver, right?
but I found it doesn't support multiple planes, so you only use capture/output, not capture_mplane/output_mplane? | [19:28] | |
...... (idle for 26mn) | |||
ezequielg | ayaka: use https://github.com/Kwiboo/FFmpeg/tree/v4l2-request-hwaccel maybe | [19:55] | |
ayaka | ezequielg, ok, the same one I used
that version doesn't support multiplanes; I just asked kwiboo | [19:55] | |
ezequielg | it doesn't?
it should support MPLANE because my driver is MPLANE and it's working :-) | [19:57] | |
hverkuil | mchehab: added support for --stream-to-host for m2m devices in v4l2-ctl
example: v4l2-ctl --stream-mmap --stream-out-mmap --stream-to-host localhost --stream-out-hor-speed 1 (note: without a --stream-from option v4l2-ctl will use the test pattern generator to generate an image) | [20:05] | |
ayaka | ezequielg, strange, it would set an NV12, not NV12M, format, which is hard coded
ezequielg, could you have a try with "ffmpeg -hwaccel drm -hwaccel_device /dev/dri/card0 -v trace -i videos/19sintel_mpeg2.mpg -an -f null -" | [20:15] | |
ezequielg | ayaka: i am currently busy with other setup
but i will post the patch and hopefully some instructions to build mpv and ffmpeg | [20:19] | |
ayaka | ezequielg, it is ok, I will switch to the v4l2-request-test from bootlin
I want to get rid of the userspace part and finish my verification as soon as possible | [20:21] | |
........... (idle for 51mn) | |||
huangd | Hello, are there any plans to bring higher bit per channel formats into videodev2.h? We're planning to include 10-bit support on our decoder, and wanted to see if there are plans, or what the reception would be to patches along those lines. Otherwise is it more desirable for us to create new reserved image formats instead of public ones? | [21:13] | |
Kwiboo | ezequielg: the ffmpeg hwaccel supports basic mplane + NV12, just not NV12M that ayaka is using, I will work on adding mplane pixelformat support in hwaccel (hard to verify without a driver using mplane pixelformats) | [21:15] | |
ezequielg | ah. | [21:17] |
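(For reference, the NV12 vs NV12M distinction behind this exchange: both are negotiated through the multi-planar API, but V4L2_PIX_FMT_NV12 is one contiguous memory plane while V4L2_PIX_FMT_NV12M splits luma and chroma into two planes. A minimal sketch, error handling omitted; fd is an already-open video node:)

```c
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Request NV12M on an MPLANE capture queue; the driver fills in
 * num_planes (2 for NV12M, 1 for plain NV12) and per-plane sizes. */
static int try_nv12m(int fd, unsigned int width, unsigned int height)
{
	struct v4l2_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	fmt.fmt.pix_mp.width = width;
	fmt.fmt.pix_mp.height = height;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12M;

	return ioctl(fd, VIDIOC_S_FMT, &fmt);
}
```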