Who | What | When | |
---|---|---|---|
*** | lexano has quit IRC (Ping timeout: 245 seconds) | [00:04] | |
................................................................................................... (idle for 8h11mn) | |||
hverkuil | mjourdan: can you make a v2 of "[RFC PATCH 0/5] Add enum_fmt flag for coded formats with dynamic resolution switching"? | [08:15] | |
....... (idle for 32mn) | |||
svarbanov: ping | [08:47] | ||
svarbanov | hverkuil, pong | [08:47] | |
hverkuil | that was quick :-)
did you see the "[RFC] Stateful codecs and requirements for compressed formats" thread? I'm preparing a v2, and for the venus codec properties I have: "venus: also needed 1 buffer to contain exactly 1 frame and 1 frame to be fully contained inside 1 buffer. It used to have some specific requirements regarding SPS and PPS too, but I think that was fixed in the firmware." (reported by tfiga) Is that correct? this refers to the decoder. | [08:48] | |
svarbanov | it is correct except " It used to have some specific requirements regarding SPS and PPS too", for which I'm not sure it is fixed for all versions of hw/fw
I think this SPS/PPS requirement is: the first buffer of the stream should contain SPS/PPS + at least one frame | [08:51] | |
hverkuil | svarbanov: OK, so the SPS/PPS shouldn't be in a separate buffer without actual frame data.
That holds at least for the first buffer; is it allowed for later buffers? | [08:55] | |
svarbanov | hverkuil, yes, this is valid for, let's say, v1 and v3 hw versions, but it is not a requirement for v4 | [08:56] | |
hverkuil | I assume the encoder produces one buffer (containing SPS/PPS + compressed frame data) per OUTPUT frame? | [08:58] | |
svarbanov | I guess SPS/PPS + frame data but only for the _first_ OUTPUT frame | [09:00] | |
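A minimal userspace sketch of what the requirement above means for a decoder client on v1/v3 hardware: SPS, PPS and at least one complete frame are packed into the first OUTPUT buffer, and each buffer carries exactly one frame. This assumes a multiplanar MMAP OUTPUT queue; the function name and the sps/pps/frame pointers are hypothetical and not taken from the venus driver or any test tool.

/* Pack SPS + PPS + the first full frame into OUTPUT buffer 0 and queue it. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int queue_first_coded_buffer(int fd, void *map, size_t map_len,
                                    const void *sps, size_t sps_len,
                                    const void *pps, size_t pps_len,
                                    const void *frame, size_t frame_len)
{
	struct v4l2_buffer buf;
	struct v4l2_plane plane;
	size_t used = 0;

	memcpy((char *)map + used, sps, sps_len);     used += sps_len;
	memcpy((char *)map + used, pps, pps_len);     used += pps_len;
	memcpy((char *)map + used, frame, frame_len); used += frame_len;

	memset(&buf, 0, sizeof(buf));
	memset(&plane, 0, sizeof(plane));
	buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = 0;
	buf.m.planes = &plane;
	buf.length = 1;           /* one plane */
	plane.length = map_len;   /* size of the mmap()ed plane */
	plane.bytesused = used;   /* SPS + PPS + one full frame */

	return ioctl(fd, VIDIOC_QBUF, &buf);
}

Later buffers would carry one complete frame each; per the discussion above, v4 hardware relaxes the SPS/PPS-in-the-first-buffer part, while the one-frame-per-buffer rule still applies.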
hverkuil | svarbanov: are there differences in requirements between V4L2_PIX_FMT_H264, V4L2_PIX_FMT_H264_NO_SC and V4L2_PIX_FMT_H264_MVC?
requirements -> behavior | [09:04] | |
svarbanov | hverkuil: the firmware cannot work without start codes, so this is not supported
MVC is also not supported | [09:05] | |
hverkuil | svarbanov: in that case, can you remove those from the venus driver?
I found them with a grep. | [09:06] | |
svarbanov | hverkuil: hmm, I cannot remember why I have them :) | [09:08] | |
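For context, the formats hverkuil grepped for are what the decoder advertises on its OUTPUT (bitstream) queue, so dropping V4L2_PIX_FMT_H264_NO_SC and V4L2_PIX_FMT_H264_MVC from venus simply makes them disappear from format enumeration. A short userspace sketch of that enumeration, assuming a multiplanar decoder node and omitting error handling:

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* List the coded formats a decoder exposes on its OUTPUT queue. */
static void list_coded_formats(int fd)
{
	struct v4l2_fmtdesc fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;

	for (fmt.index = 0; ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0; fmt.index++)
		printf("%c%c%c%c: %s (flags 0x%x)\n",
		       fmt.pixelformat & 0xff,
		       (fmt.pixelformat >> 8) & 0xff,
		       (fmt.pixelformat >> 16) & 0xff,
		       (fmt.pixelformat >> 24) & 0xff,
		       (const char *)fmt.description, fmt.flags);
}

The flags field printed here is also where the enum_fmt flag for dynamic resolution switching, mentioned at the top of the log, would presumably be reported once that RFC lands.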
......................... (idle for 2h2mn) | |||
*** | LazyGrizzly has left | [11:10] | |
...................... (idle for 1h49mn) | |||
kbingham | pinchartl, hverkuil, mchehab, I have a similar change to make across multiple drivers. (similar to the recently posted [PATCH] media: i2c: adv748x: Convert to new i2c device probe())
http://paste.ubuntu.com/p/D7sk466dPK/ shows the files to change within drivers/media ... Is there a preferred breakdown of patches for this? Should I submit all as a single patch, or one patch per file, or perhaps one patch per vendor? (i.e. the ov* files could be a single patch) | [12:59] | |
hverkuil | kbingham: I split it up into directories: so one patch for i2c, one for radio. And I add CCs to any specific driver maintainers (if any).
This assumes all the diffs for drivers are basically following the same pattern. If there are drivers that need more work (i.e. it is not a trivial patch), then I do those separately. | [13:04] | |
kbingham | hverkuil, all very similar and should be trivial | [13:07] | |
hverkuil | Then I suggest one patch per subdirectory. | [13:08] | |
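The change being discussed is the conversion from the legacy i2c .probe callback, which takes a struct i2c_device_id argument that most media i2c drivers never use, to .probe_new, which drops it. Below is a generic sketch of the resulting shape with "foo" as a placeholder driver name; it is an illustration of the pattern, not the actual adv748x diff.

#include <linux/i2c.h>
#include <linux/module.h>

/* New-style probe: no i2c_device_id parameter. */
static int foo_probe(struct i2c_client *client)
{
	/* same probe body as before, minus the unused id argument */
	return 0;
}

static struct i2c_driver foo_driver = {
	.driver = {
		.name = "foo",
	},
	.probe_new = foo_probe,	/* was: .probe = foo_probe with the old prototype */
};
module_i2c_driver(foo_driver);

MODULE_LICENSE("GPL");

Drivers that actually consult the id table need a bit more work, which matches hverkuil's point about handling non-trivial conversions in separate patches.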
........................... (idle for 2h14mn) | |||
*** | benjiG has left | [15:22] | |
.................................. (idle for 2h46mn) | |||
tonyk | hverkuil: when you say that output -> scaler -> capture is a mem2mem device, do you mean that it behaves like a mem2mem device or that it should implement the mem2mem API? | [18:08] | |
hverkuil | tonyk: it behaves like a mem2mem device, but it doesn't implement the m2m API. So it is two video devices, one for capture, one for output, and not combined into a single video device. | [18:10] | |
tonyk | thanks hverkuil. The way it's implemented now, it looks like vivid with loopback enabled: with no frame from the output device, the capture side will send noisy frames to userspace
I'll change it to stall the streaming instead of sending noise | [18:13] | |
.... (idle for 18mn) | |||
hverkuil: make sense? | [18:31] | ||
........................ (idle for 1h57mn) | |||
hverkuil | tonyk: yes.
vivid is different: it emulates what happens if you connect e.g. an S-Video output to an S-Video input: if the output isn't sending anything, you get static on the video capture. It does the same for HDMI loopback, which is a bit dubious: stalling would be a better emulation, since you don't get static for digital video. | [20:28]
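A very rough kernel-side sketch of the two behaviours being contrasted, using a hypothetical struct my_dev and hypothetical helpers rather than code from vivid or tonyk's driver; only the vb2 calls are real API.

#include <linux/types.h>
#include <media/videobuf2-v4l2.h>

struct my_dev {
	bool emulate_analog_loopback;	/* vivid-style static vs. stalling */
	/* queues, locks, work items, etc. omitted */
};

/* Hypothetical helpers, declared only so the sketch reads cleanly. */
static struct vb2_v4l2_buffer *my_dequeue_output(struct my_dev *dev);
static void my_fill_with_noise(struct vb2_v4l2_buffer *cap);
static void my_scale(struct my_dev *dev, struct vb2_v4l2_buffer *out,
		     struct vb2_v4l2_buffer *cap);

static void process_one_capture_buffer(struct my_dev *dev,
				       struct vb2_v4l2_buffer *cap)
{
	struct vb2_v4l2_buffer *out = my_dequeue_output(dev);

	if (!out) {
		if (dev->emulate_analog_loopback) {
			/* analog emulation: no signal -> static on capture */
			my_fill_with_noise(cap);
			vb2_buffer_done(&cap->vb2_buf, VB2_BUF_STATE_DONE);
		}
		/* digital emulation: stall, leave the capture buffer queued */
		return;
	}

	my_scale(dev, out, cap);
	vb2_buffer_done(&out->vb2_buf, VB2_BUF_STATE_DONE);
	vb2_buffer_done(&cap->vb2_buf, VB2_BUF_STATE_DONE);
}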