ezequielg: ping
hverkuil: pong sorry for the late pong :-)
hverkuil: oh, thanks for the feedback. late start today.
ezequielg: I'd forgotten about my ping :-)
Quick question regarding the rockchip patches: are the jpeg codec and the mpeg codec using the same video devices? i.e. the decoder (say) uses videoX and the output format enumeration gives both jpeg and mpeg formats?
no. as the cover letter shows in the dot graph. encoding is e.g. videoX, decoding is e.g. videoY
it's a major pain to implement (from the codec API point of view) otherwise. and a major pain for users.
in other words, I'd really like to have video devices with only one encoded pad/side
That's not what I mean. Given the video node for decoding (videoY in your example): can that be used for both JPEG decoding and MPEG decoding, depending on the selected output format? Or are the JPEG and MPEG decoders handled in separate video nodes?
ah yes, a given video node handles multiple video formats for decoding. so, yes, it can be used for e.g. MPEG-2 decoding, and whatever other codec is supported by that VPU engine.
hverkuil: note that rockchip has two different engines, one is called VPU and the other is called VDEC. both will be supported by the driver.
OK. This may cause problems for applications since JPEG is quite different from MPEG (frame-by-frame decoding vs a bitstream) and this may be hard to detect in a generic way.
Indeed.
So, what I meant is: it will support multiple stateless decoders.
I'm not aware of any codec drivers today that support both in a single video node. JPEG codecs are normally standalone devices.
I don't have plans to support JPEG decoding tbh. but even if we do, I guess we can route that via its own video node. as long as it goes to a single m2m, to serialize the jobs in a single engine, things will work
Ah, I see. The current rockchip driver supports a JPEG encoder and your patch series adds an MPEG decoder. Each has its own video node. Right?
yup
And if an MPEG encoder would be added as well, then it is probably a good idea to give that its own video node and not to mix it with the JPEG encoder. Ditto when adding support for a JPEG decoder.
even if we have a single video node per codec, I don't think that affects anything.
It should be fine to support h264/mpeg/etc in one video node, but adding jpeg to that mix is not a good idea. Just something to keep in mind for the future.
unless we add stateless JPEG decoding :-) with the controls we discussed a while ago. although, I am not sure I will open that can.
hverkuil: if possible, please review the media controller changes first on that patchset. because you will dislike that the most :)
hverkuil, in your review, you seem to emphasize that using the M variant of the pixel format implies that the memory planes are non-contiguous
hverkuil, I personally don't think the usage of the M plane variant should be bound to how many allocations you have, as this API could be used to represent a single-allocation plane layout with arbitrary padding between planes
ndufresne: that's not the case today: the M variants all assume non-contiguous planes, each allocated separately.
that is sad. does that mean yet a third API for the case I describe?
we really need something cleaner when the time comes to import external buffers, it's such a pain for userspace. I spent several weeks implementing this in gstreamer, and we still have cases where I forgot to validate something
We have plans to make something cleaner, yes, and yes, that will be a third unified API.
will it be able to handle multiple formats in a single queue?
I agree completely: we messed up at the time, we should have bitten the bullet and made new versions of the streaming I/O ioctls that can cope with such things.
sorry, not formats, but frame sizes?
That we can already do with CREATE_BUFS.
well, not really. I don't think it's feasible with the current API, or at least I don't see how. and even if we could, you keep increasing the number of buffers, which is not what you want. let's say you are encoding
We're missing DELETE_BUF, I know. And CREATE_BUFS isn't perfect either.
and you have a resolution change that you want to happen as gaplessly as possible. you want to simply start queuing/importing buffers of the new size. and then when the encoder reaches that new size, something should happen, which may involve userspace, but you want the buffers to have been wrapped by the kernel already
Erm, let's get the codec API settled first, shall we? There is so much that can be improved, but one thing at a time.
but ok, I'm clearly speculating here, I'll check the proposed API when the time comes
hverkuil, btw, you should dig up why a multi-format codec can't have jpeg, it makes very little sense to me. if you look at the amlogic driver, it's multi-codec, and each codec has its pixel format particularities, and it's surprisingly cleanly handled
so far, it's separate for the only reason that all integrated HW have separate (unrelated) IP
and userspace isn't affected by that, because it first decides which formats it will use the node for, and then uses the associated ioctl/ctrl subset for that format
A JPEG codec behaves differently from a bitstream codec (esp. a stateless bitstream codec). While it is certainly possible to support them all in one video device, I am not convinced that it is a good idea. When we get a driver like that, I would have to take a good look at it.
well, it is not necessarily stateless, and jpeg is a bitstream that you can parse from a random location, so I don't understand why you say it's so different
I'm pretty sure none of the JPEG codecs today can start at a random point. AFAIK they all expect a normal JPEG 'picture' to decode.
the same applies to all our H264 decoders, I only meant to describe the bitstream
Anyway, we'll see how it will work out.
It hasn't crystallized for me either, I just have a gut feeling that there may be problems if you attempt to mix them in a single video node.
following the codec API, userspace first sets the encoded format, and then things just flow from there. As long as you have one encoded side (i.e. the node is a decoder only or encoder only), I believe it should be OK. but that might be naive.
we still have to merge h264, hevc, vp8 and vp9 controls... so there is a lot to talk about. and we haven't discussed stateless encoders.
ezequielg, agreed
ezequielg, I thought you wanted to rename v4l2_m2m_buf_copy_data, but I see you just use it as-is, did I miss something?
hm they are two different patchsets. I expect the rename to be rejected or accepted sooner than the mpeg-2 one. sorry if this is confusing, I am actually doing lots of changes and it's not easy to keep track of them all.
hverkuil: I just submitted a v2 for the pixel format helpers alone. Note that since we now have the helpers in v4l2-common.c, a small change is needed to rockchip_vpu_enc.c to avoid a collision.