hverkuil1, ping
stdint: pong
hverkuil1, since last time I sent a mail to discuss the issues with making VA-API work with V4L2
I haven't got any reply yet. But I found another issue: the buffer handling in VA-API really doesn't take V4L2 multi-planes into account
I mean it never considers the multi-plane usage in V4L2
so that's a problem if we want stateless VPUs to work well with V4L2
With 'multi-plane' do you mean that each plane is in a separate buffer, or just that you have a luma and a chroma plane in the same buffer (one after the other)?
hverkuil1, I think multi-planes doesn't permit the buffer addresses of the luma and chroma to be contingues
Contiguous
s/permit/promise/
The multi-planar API is really a multi-buffer API: it's a superset of the normal API and it allows for cases where each plane is in a separate buffer.
so I think if VA-API is to be used as the glue library in userspace
But we have support for a lot of formats where both planes are in the same buffer:
huge modifications would be required here
https://hverkuil.home.xs4all.nl/spec/uapi/v4l/pixfmt-nv12.html
https://hverkuil.home.xs4all.nl/spec/uapi/v4l/pixfmt-nv16.html
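(A minimal sketch of the difference under discussion, assuming a capture device: NV12 keeps luma and chroma in one buffer via the single-planar API, while NV12M splits them into separate buffers via the multi-planar API. Error handling omitted.)

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* NV12: single-planar API, luma followed by interleaved CbCr in one buffer. */
    static int set_nv12(int fd, unsigned int w, unsigned int h)
    {
            struct v4l2_format fmt;

            memset(&fmt, 0, sizeof(fmt));
            fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            fmt.fmt.pix.width = w;
            fmt.fmt.pix.height = h;
            fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12;
            return ioctl(fd, VIDIOC_S_FMT, &fmt);
    }

    /* NV12M: multi-planar API, luma and CbCr in separate buffers (planes). */
    static int set_nv12m(int fd, unsigned int w, unsigned int h)
    {
            struct v4l2_format fmt;

            memset(&fmt, 0, sizeof(fmt));
            fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
            fmt.fmt.pix_mp.width = w;
            fmt.fmt.pix_mp.height = h;
            fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12M;
            fmt.fmt.pix_mp.num_planes = 2;
            return ioctl(fd, VIDIOC_S_FMT, &fmt);
    }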
but what about V4L2_PIX_FMT_NV12M?
Do you *need* to use NV12M? Is that a hardware requirement?
If you can use NV12, then that should make life a lot easier.
hverkuil1, yes and no
hverkuil1, for the decoder, NV12 would work with an alignment
it is also the correct solution
for the encoder, it needs NV12M
there are two registers to set them
actually three
one for luma, one for cr, one for cb
in nv12m, the cr and cb registers would be set to the same value
So, if you get a buffer containing NV12 formatted data, can't you just set the luma to the start of the buffer and cb/cr to the start of the chroma plane?
Usually 'real' multiplanar formats are only needed if there are HW restrictions (luma has to go in one memory bank, chroma in another as is the case for the exynos4).
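(A rough sketch of what Hans suggests for the single-buffer NV12 case: point the luma register at the start of the buffer and the cb/cr registers at the start of the chroma plane. The my_vpu structure and register offsets are purely illustrative, not from any real driver.)

    static void program_nv12_base_addresses(struct my_vpu *vpu, dma_addr_t buf,
                                            unsigned int bytesperline,
                                            unsigned int height)
    {
            /* The chroma plane starts right after the luma plane. */
            dma_addr_t chroma = buf + (dma_addr_t)bytesperline * height;

            writel(buf, vpu->regs + REG_LUMA_BASE);   /* hypothetical offsets */
            writel(chroma, vpu->regs + REG_CB_BASE);
            writel(chroma, vpu->regs + REG_CR_BASE);  /* same value, CbCr interleaved */
    }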
ok, then it may work for Rockchip
but I am not sure about the situation for the other vendor
'the other vendor'?
hverkuil1: the issue that you pinged me about, VP9 reference buffers in presence of resolution changes is something I was planning to discuss on ELCE as well
maybe the VPUs used by other vendors would have such HW restrictions
posciak: OK, good to know. I'm available for both Monday and Tuesday during that week.
hverkuil1: this is great, it would be good to have 2 days
hverkuil1: what I'm thinking is that codec API needs to be finalized
and I have a write up
which I'd like to review in detail
and a few open items for things like the above issue
I agree. If you can prepare something beforehand, that would be great.
I will
and this is separate from the request API
and the slice/frame APIs
just codec API in general
which is long overdue for better documentation
I actually wrote a spec up
after our previous discussions
Great!
posciak, could you CC me and let me join the discussion?
that part would be necessary for me to do my job
codec API will be discussed at ELCA
ELCE
I won't be there, sorry
posciak, if there is still time, could you send an invite to my supervious?
supervisor
I am not the organizer, sorry
there is still some time until then
posciak, I mean let us send somebody to join your discussion at ELCE
as I am quite interested in your plan
stdint: http://events.linuxfoundation.org/events/embedded-linux-conference-europe
hverkuil1, could I send you a private message
Note that this meeting is actually not part of the ELCE, so you don't need to register for the ELCE (although I recommend it, it's a good conference).
stdint: sure
hverkuil1, pinchartl: the biggest issue with the meta devnode patchset is that it gives no clue about what a driver and userspace should do with regard to the buffer size and to buffer size changes...
and what the kernel will do if someone allocates a buffer that is not big enough to accommodate a size change via a control
pinchartl: in other words, what I'm saying is that *you* need to convince me that the approach you're proposing will work in a sane way. The current patchset doesn't do that ;)
mchehab: the current patch set indeed keeps it silent
I have to go now I'm afraid, I'll be back in a few hours
by the way, do you plan to merge Ricardo's HSV patches for v4.9 ?
ok
only if an upstream driver (not including vivid) uses it
I've posted a patch for the VSP driver to use the HSV format
and Ricardo has included it in his pull request
ok. So, I'll handle it.
I should likely review patches by the end of this week, or on the next Monday (I usually do that once per week)
posciak, about the codec API in general, would you define a certain structure to define the parameters?
mchehab: good to know. I'll see if I can go through my pending patches tomorrow or Thursday so you'll have pull requests from me by Friday.
ok, good
well, if you do it on Friday, I may still handle it, as you're in a different time zone ;)
hverkuil1: there's an ABI warning in your build from yesterday
on v4l2_ycbcr_encoding
commit adca8c8e251fdbdb6b28e3ef7a5ca24b597b82d8
Author: Hans Verkuil <hans.verkuil@cisco.com>
Date:   Thu Aug 4 06:01:48 2016 -0300
    [media] videodev2.h: put V4L2_YCBCR_ENC_SYCC under #ifndef __KERNEL__
yeah, harmless. I added pxa support for the first time and then you get a false warning.
that seems to be the culprit
it does a diff with the previous ABI and warns if it is different. Since there was no original abi, it warns as well.
OK
I could fix the script, but adding a new arch happens so rarely that it's not worth the trouble.
yeah, we're not passing the enum directly to userspace, but as u32
so userspace should be safe
hverkuil1: hello, I'm writing a CEC driver; which cec-compliance should I run?
benjiG: always use the v4l-utils git repo.
That always has the latest and greatest.
hverkuil1: that's what I'm doing (using the cec-johan branch)
No, that's my own private branch.
Just use https://git.linuxtv.org/v4l-utils.git/log/
Everything from cec-johan has been merged there and all further development happens in v4l-utils.
ok
For which SoC?
STIH board
Note that there are quite a lot of cec fixes pending for kernel 4.8: https://git.linuxtv.org/media_tree.git/log/?h=fixes
You probably want to test with those fixes included in your kernel.
I will update my kernel on this branch and re-test my driver
BTW, if your hardware supports snooping mode (or monitoring mode, or whatever it is called), then I strongly recommend you implement that. It's great for debugging CEC issues.
no it doesn't support that
Also have the HDMI transmitter pass on the CEC physical address to the CEC driver when it reads it from the EDID. You don't want to rely on userspace for that.
until now I rely on userspace for that
I don't know how to get the CEC info from my DRM/KMS driver (any helpers for that?)
I do: "cec-ctl --tuner -p 1.0.0.0"
and then "cec-compliance --test-tuner-control -v"
Russell King has been working on something: https://www.spinics.net/lists/arm-kernel/msg523556.html
But there are some issues: https://patchwork.kernel.org/patch/9277057/
I haven't seen a newer version from Russell.
nothing about that in -next
so I guess I will continue to rely on userspace
The driver will have to be in staging (the CEC framework itself is still in staging as well), and I think this should be fixed before it can get out of staging.
Hopefully we'll have a framework like Russell proposes in the kernel by then.
yes, I have written it in the drivers/staging/media/st-cec directory
hverkuil1: I have rebased my patches on top of media_tree and am using the latest v4l-utils
cec-ctl -S gives me the topology
and the cec-compliance --test-tuner-control report is OK
any other tests to do before upstreaming my driver?
cec-compliance -A
  -A, --test-adapter                  Test the CEC adapter API
got 3 failed with -A ...
What are the failures?
fail: cec-test-adapter.cpp(179): check_0(laddrs.features[i] + 4, 8)
	CEC_ADAP_G/S_LOG_ADDRS: FAIL
		fail: cec-test-adapter.cpp(311): msg.len != 5
	CEC_TRANSMIT: FAIL
		fail: cec-test-adapter.cpp(436): msg.tx_ts || msg.tx_status || msg.tx_arb_lost_cnt || msg.tx_nack_cnt || msg.tx_low_drive_cnt || msg.tx_error_cnt
	CEC_RECEIVE: FAIL
	CEC_TRANSMIT/RECEIVE (non-blocking): OK (Presumed)
		fail: cec-test-adapter.cpp(832): m != mode
	CEC_G/S_MODE: FAIL
Do you have the CEC fixes in your kernel?
I'm on top of media_tree/master
ah, I will switch to media_tree/fixes
Is the adapter connected to an actual CEC device?
yes to a TV
The first fail suggests that you don't have commit 292eaf50 (cec: fix off-by-one memset), but it's in the master branch.
Can you check that you have that patch?
I'm rebuilding my kernel with media_tree/fixes branch
give me 2 minutes to run cec-compliance against this
mchehab: how should I try to convince you that using controls for the metadata API would be a good idea ? I can mention in the documentation that the format is selected through controls, but would that be enough ?
benjiG: are you aware that the vivid driver emulates two CEC as well?
Each HDMI input/output will have a CEC device.
pinchartl: it is not just that: you also need to define how errors will be handled if the buffer is too small after the size is changed by a control
Running cec-compliance -A with vivid works fine (with the master tree, no fixes applied)
yes: I have watched your talk at ELC on YouTube :-)
mchehab: that's pretty simple. S_CTRL will fail in that case
and return an error
pinchartl: what error? And how will userspace know what the needed size will be?
my laptop battery is almost empty
I'll be back shortly
hverkuil1: got the same issues with media_tree/fixes branch
benjiG: test with vivid: modprobe vivid; cec-ctl --tv; cec-ctl -d1 --playback; cec-compliance -A
 hverkuil1: cec-compliance -A is OK on my setup with vivid
hverkuil1: my TV only supports CEC 1.4; could that impact the compliance test?
It shouldn't.
Be aware that that doesn't mean there are no issues :-)
It's all pretty new, and I won't claim to have tested all combinations.
sure :-)
Hi! Quick question: in struct v4l2_pix_format, should the sizeimage field be format->width * format->height * bytes_per_pixel, or should it be format->bytesperline * format->height?
I understand that format->bytesperline can be greater than format->width * bytes_per_pixel, but should sizeimage be the actual size of the image or the size of the buffer?
format->bytesperline * format->height
hverkuil1, thanks
sizeimage should be the actual size of the buffer, which may be > format->bytesperline * format->height as well if there are special alignment requirements.
hverkuil1, I see, thank you, that makes sense :)
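(A small worked example of the distinction, using a hypothetical 1280x720 YUYV capture with a 64-byte line alignment.)

    /* 2 bytes per pixel for YUYV; the driver pads each line to 64 bytes. */
    bytesperline = ALIGN(1280 * 2, 64);   /* = 2560, already a multiple of 64 */
    sizeimage    = bytesperline * 720;    /* = 1843200 bytes minimum */
    /* A driver may round sizeimage up further (e.g. to a page or DMA burst
     * boundary); userspace should always use the sizeimage returned by
     * G_FMT instead of recomputing it. */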
mchehab: regarding S_CTRL
first question, what error code ?
we have several options
-EINVAL and -EBUSY are commonly used
-EBUSY is used today for all ioctls that try to change parameters that can't be changed while streaming
-EINVAL is used by the exynos4-is driver when changing rotation while streaming if the result wouldn't be acceptable
we can also use another error code
-ENOBUFS for instance
which is documented as "No buffer space available"
or -ENOSPC in a similar spirit
second question, how will userspace know what size to allocate ?
let's not forget that we're dealing with device-specific formats
userspace applications are thus not device-agnostic
they know how to deal with the metadata
pinchartl: the whole point of having a public API is to provide a device-agnostic way to control devices
that's why there are lots of VIDIOC_ENUM_* and VIDIOC_*_CAPS ioctls
and that's Hans' point: device-specific features should be accessed through controls
whatever we do should keep providing ways for applications to be aware about what the device provides
we're not talking about standard formats but about device-specific formats
well, we could use controls instead of adding a meta device
but if we're adding it, let's do it right
hverkuil1: are you there ?
it's easier if we can all talk together, me passing messages between you and Hans in the same channel is a bit pointless :-)
regardless of how we do it, applications will need to know about the device to use the API
as the content of the buffer will be device-specific
pinchartl: you're missing the point: what I need is a consistent proposal that won't look like a hack for a specific device
there's no way an application could use the statistics data without knowing about the device
is your concern that a generic application wouldn't be able to use the device ?
pinchartl: one seeing all those proprietary webcam driver-specific formats might think the same
but we did find a way to handle those via libv4l
using device-specific code in libv4l, yes
there's nothing that would prevent us doing something similar here
provided that kernelspace provides enough information, a library can do what's needed
the device-specific code would know about buffer sizes
but the kernelspace needs to provide such info
what information would be missing here for a library to handle this ?
(in the case of those webcams, one different fourcc for each format, and a consistent way to report the buffer size)
the information that is missing is the buffer size
- the piece of information is provided through G_FMT for the currently selected mode
even a simple app that would read from a meta device and write a binary blob will not work if it can't manage to get the buffer size right
- and it's known to any device-specific userspace code
so device-specific code in libv4l knows the buffer size
don't forget that what we're talking about here is the following use case
yes, for normal devices, ENUM_FMT, TRY_FMT, S_FMT and G_FMT provides that
1. controls are set to initial values, selecting a mode
2. G_FMT is used to retrieve the buffer size
if you switch it to a control, you lose those
so, you need to provide some other mechanism
3. buffers are allocated
4. userspace starts the stream
5. userspace wants to modify the mode controls, requiring a larger buffer size
without 5. there's no issue
G_FMT reports the size
there's a datasize field in the metadata format structure
it works the same way sizeimage does for images
so the size *is* reported by the kernel
for the currently selected format/mode
the same way we do it today for images
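(A sketch of the flow being described, assuming the metadata format structure from the patchset under discussion exposes the size in a 'datasize' field; the buffer type and field names here follow that proposal and are not final API.)

    struct v4l2_format fmt;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_META_CAPTURE;   /* assumed buffer type name */
    ioctl(fd, VIDIOC_G_FMT, &fmt);           /* kernel fills in the currently selected mode */
    needed = fmt.fmt.meta.datasize;          /* buffer size required for that mode */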
the problem is with (5)
users are allowed to change a control anytime
no they're not
but a control doesn't return anything except an error code
we already disallow changing controls in many cases
returning an error code when a control can't be changed is fine
then, your proposal is that those controls will be enabled only when the video stream is not started?
those controls can be changed
- when the stream is off
- when the stream is on, if queued buffers are large enough
but let's not forget one thing
no, they can be changed only if CREATE_BUFS or REQBUFS has not been called
after that, it is too late, and they should be disabled
changing the mode at runtime requires synchronizing the mode with buffers
(from what I understood that you want)
to do so we need the request API
which will bundle controls, formats and buffers in a request
and that will be validated in one go
without the request API changing the mode is pointless
you will get buffers with the new mode at some point
but you won't know when
so there's no way you could interpret the content of the buffer
I'm fine disallowing changes to those controls when the stream is on
exactly the same way we would disallow changes to the format when the stream is on if the mode was selected through the format
and only supporting changing the mode at runtime with the request API
I'm not seeing how the request API will handle buffer size increase needs
the request API is about synchronizing changes to parameters (regardless of whether they're controls or formats) with buffers
it doesn't add a mechanism to convey size information to userspace
or rather not a new mechanism
so the problem remains, even with the request API
the request API will make it possible, though, to try a request, like we try controls or formats
so you will be able to try the combination of a format and mode controls, and read back the size from the format structure
trying a bundle of controls + formats is something that is not possible today, even with the video API
it's not a new issue with metadata
so, your proposal is to use TRY_FMT? How will this work with a control?
we don't have a TRY_CTRL today
no, not TRY_FMT
TRY_REQUEST (how whatever we will call it)
s/how/or/
note that TRY_REQUEST would only be needed for the specific use case that we're talking about
which requires synchronization between buffers and controls/formats, and thus can *not* be supported without the request API
all other use cases that don't require changing the mode while streaming will work with our existing APIs
using G_FMT to retrieve the buffer size
provided that we enforce a specific order of calls for those controls that may affect the format
yes, controls should be set first, and G_FMT called then
e.g. being called before REQBUFS/CREATE_BUFS
the same way V4L2_CID_ROTATE needs to be handled today
or after releasing the buffers
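(A sketch of the ordering being discussed, with V4L2_CID_ROTATE standing in for the mode control since it is the existing case that behaves this way; error checking omitted.)

    struct v4l2_control ctrl = { .id = V4L2_CID_ROTATE, .value = 90 };
    struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
    struct v4l2_requestbuffers req = { .count = 4,
                                       .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
                                       .memory = V4L2_MEMORY_MMAP };

    ioctl(fd, VIDIOC_S_CTRL, &ctrl);   /* 1. select the mode first */
    ioctl(fd, VIDIOC_G_FMT, &fmt);     /* 2. read back the resulting sizeimage */
    ioctl(fd, VIDIOC_REQBUFS, &req);   /* 3. only then allocate buffers that fit */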
fortunately, only 2 SoCs use V4L2_CID_ROTATE...
yes, 2 SoCs and 3 drivers
we should likely rename that control to V4L2_CID_ROTATE_DEPRECATED and add a new one with a proper way to handle buffer size changes
how would you do that ? :-)
properly documented
we did that already in the past
with some ioctls that were defined like crap
no, I mean what would the new way look like?
if you have a good proposal that could be applied to statistics modes I wouldn't be against it :-)
first of all not returning -EINVAL if the buffer doesn't fit ;)
and properly documenting at S_FMT and at the ioctl description what happens when the buffer doesn't fit
yes, it should be properly documented of course
we can't change the behavior of the existing control, but we should do that for newer drivers that may want such functionality...
but the question is, how would it look like ?
would it be a control ?
or something else ?
and how would it interact with formats ?
to be frank, I would prefer to have an S_FMT kind of ioctl to handle such cases, even if it contains a list of controls inside
we don't even have to rename the existing control to V4L2_CID_ROTATE_DEPRECATED, we could just document how it should be used from now on
are you thinking about a single ioctl with a list of controls and format(s) ?
well, we probably can't even change the return code without risking breaking the uAPI
this is just a brainstorming...
I think we can change the return values for new drivers. there's no uapi for new drivers as by definition nothing is using them :-)
yes, just brainstorming
so are you thinking about a single ioctl with a list of controls and format(s) ?
but we could have something similar to an S_FMT+S_EXT_CTRLS (for the specific controls that change the format)
I really like that idea
that's how I was thinking about designing the request API
but the current implementation is based on the existing S_FMT and S_EXT_CTRLS ioctls
as Hans would like to stay compatible with those
but I'd prefer a single ioctl that would effectively be a combination of S_FMT+S_EXT_CTRLS+QBUF
(and probably the same for subdevs)
it would be much easier
but back to the point
regardless of whether we use controls or formats to configure the statistics mode
the request API will support that, and will be needed for the use case of modifications to the mode at runtime
for all other use cases, both S_CTRL (with an explicit order of S_CTRL, G_FMT, REQBUFS) and S_FMT would work
I have a small preference for S_CTRL. I haven't tried to implement the same with S_FMT, but thinking about how it would look like doesn't really convince me for now
I could come up with a prototype, but I'd want to brainstorm it first, and I'd want Hans' approval. he currently prefers S_CTRL for this purpose
hverkuil1: still not there ? :-)
the thing with S_CTRL is that it can return just an error code (and possibly an updated value for the control)
while, for things that control the format, we also need the buffer size (and some additional meta-data, like line size, for /dev/video devnodes)
well, I need to do something else right now. We should try to brainstorm it with hverkuil1 and sailus too
but we return the buffer size through G_FMT. Are you OK with the S_CTRL -> G_FMT -> REQBUFS use case (which doesn't allow changing the control while streaming)?
yes we should
would you have time for that tomorrow or on Friday when Hans and Sakari will hopefully be there ?
yeah, I have time tomorrow
I'm actually planning to review patches on Friday
so, the best would be if we could do it tomorrow or on Thru
sorry I meant tomorrow or Thursday
works for me, let's see what Hans and Sakari will say
thanks for the discussion
anytime
hi guys
I was wondering if I can repurpose my webcam as a USB->I2C bridge
I have an I2C sensor and I would like to solder it onto the USB webcam's I2C interface
K4rolis, which driver is used?
the driver that's loaded for the webcam is uvcvideo
weiman: UVC cameras don't expose the sensor to the host
you would have to develop a firmware replacement for the webcam
additionally, note that I2C is only used to control the sensor. data is transmitted separately, likely on a parallel bus
yes, I want to have access to the I2C interface
by I2C sensor, do you mean a video sensor ?
I have an i2c sensor that I would like to read out by abusing the webcam chipset
no
a fancy thermal imaging sensor :)
ah ok
but still an imaging sensor ?
what kind of data interface does it have ?
I2c :)
it's 4x16 pixels... low data rate
ok
you could possibly do that if you replaced the webcam firmware
that won't be straightforward
hmmm I'm not really keen on doing that..
given that I'm not aware of webcam USB bridges with public documentation
then I'm afraid it won't be an option
some old webcams expose the I2C interface to the host
by old we're talking about 10+ years
oh nose...
I got on of those cheap webcams...
*one
so just to clarify - UVC is a standardized interface for webcams, provided by the controller chip, which abstracts all the I2C configuration away?
K4rolis: correct
oh man, that is too bad :(
reading some of the v4l documentation, I had the impression that I just had to add my sensor as another v4l2_subdev
that's the case on devices that expose the I2C interface to the host
nowadays that's mostly PCI devices, as well as embedded devices
and some non-webcam USB video devices
I'd recommend getting a low-cost ARM board, that would be much easier
https://getchip.com/pages/chip costs $9 :-)
the USB connection is limited to usb-serial though
we're kind of interested in having the webcam video stream together with the thermal sensor data from one device, that would be our end goal
the controller chips having an I2C interface to configure the sensor got our hopes up :)
the webcam USB bridges usually take video data from a dedicated video sensor parallel interface
do you want your device to appear as a USB camera ?
no
my current prototype setup is one where I use a Bus Pirate to read out my IR sensor
ideally, what I thought could happen (keep in mind that I'm no kernel dev) is to have a separate /dev/thermal0 or something like that which we could then use; the point would be to multiplex the image data + the sensor data through a single USB device
and I use Python to superimpose it onto a webcam feed
and then do the superimposing, as weiman mentioned, as post-processing at the application level rather than the driver level
so, instead of telling me how you have thought about implementing this, could you tell me what you actually need in terms of features and functionality? :-)
give us a second, we can give you our final application vision :)
http://imgur.com/a/kzvXe
the 4x16 pixels come from the IR sensor, the I2C device
ok, and ?
the idea is to sort of have a budget-friendly thermal imaging camera
I just need an I2C interface :)
are you looking to build a device that streams such a video stream ?
what we have now is quite an impractical prototype which uses a webcam as well as an additional device; what we would like to have is a single device that streams the video as well as the thermal data. we do not need to superimpose the thermal data onto the image at the controller level, we can easily do that in post-processing like we do now
we thought that if we had an I2C interface exposed to the host, we could somehow multiplex the video data together with the sensor data
so you would like a single device, pluggable to any computer, that would give you both a normal video stream and thermal data ?
to any linux computer that would have our modified driver :)
ok, I get it now
we're quite comfortable physically hacking the hardware
hacking an existing webcam for that would be pretty hard
as you would need not just to replace the firmware, but to actually modify it as you want to retain the webcam function
it would be nice if you could start from an open-source USB camera
but the only one I'm aware of is https://en.wikipedia.org/wiki/Elphel and it would be quite overkill :-)
haha, there goes the "budget friendly" part of the project
another option would be to use a small ARM-based system with a video sensor
yup, I think that's our most viable option
shame that cheap hardware is not hacker-friendly :)
pinchartl: just out of curiosity - has there been any project to your knowledge that tried hacking the firmware of existing webcam controllers?
K4rolis: I'm afraid not
that's what I guessed, too much effort for little return :)
thanks for your insight, pinchartl!
we shall pursue less elegant ways of accomplishing this
there might be an elegant way I can't think of right now, but at least hacking a USB webcam seems impractical at best