On 20.03.2009 13:11, Hawes, Mark wrote:
Since upgrading to VDR-1.7.4 I have experienced some instability when recording to an NTFS partition.
The recording itself seems to proceed OK, but the recorded programs will generally only play for a few seconds before freezing. Pausing live video and then resuming play will also trigger the problem. Subsequently, rewinding a recording that has been paused this way will only rewind to the pause point, not to the beginning of the recording.
All works well when using vdr-1.7.2 to record on the same system to the same partition, and when I placed my recording directory on a reiserfs partition all worked well too. So it appears to be a problem with recording on NTFS partitions, introduced with the switch to .ts file recording.
Does it make a difference if you comment out the line
#define USE_FADVISE
in VDR/tools.c?
Klaus
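For readers unfamiliar with that switch: a minimal sketch of how a compile-time define like USE_FADVISE typically gates posix_fadvise() calls around file writes. This is illustrative only, not VDR's actual tools.c code; the function name and structure here are made up.

#include <fcntl.h>
#include <unistd.h>

#define USE_FADVISE   // comment this line out to disable the advice calls

// Write a chunk and, if enabled, advise the kernel that the just-written
// range will not be needed again soon, so it can be dropped from the page
// cache instead of pushing out other data.
ssize_t WriteChunk(int fd, const void *buf, size_t len, off_t &written)
{
  ssize_t n = write(fd, buf, len);
  if (n > 0) {
#ifdef USE_FADVISE
     posix_fadvise(fd, written, n, POSIX_FADV_DONTNEED);
#endif
     written += n;
  }
  return n;
}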
Hi Klaus,
I commented out that line (line 1442 in tools.c) and recompiled. No better.
If I pause live video and then resume, playback starts in slow motion (it judders) and eventually freezes, but may then start again, still juddering and freezing.
Mark
________________________________
On 21.03.2009 00:51, Hawes, Mark wrote:
On 20.03.2009 13:11, Hawes, Mark wrote:
Since upgrading to VDR-1.7.4 I have experienced some instability when recording to an NTFS partition.
The recording itself seems to proceed OK, but the recorded programs will generally only play for a few seconds before freezing. Pausing live video and then resuming play will also trigger the problem. Subsequently, rewinding a recording that has been paused this way will only rewind to the pause point, not to the beginning of the recording.
All works well when using vdr-1.7.2 to record on the same system to the same partition, and when I placed my recording directory on a reiserfs partition all worked well too. So it appears to be a problem with recording on NTFS partitions, introduced with the switch to .ts file recording.
Does it make a difference if you comment out the line
#define USE_FADVISE
in VDR/tools.c?
Klaus
Hi Klaus,
I commented out that line (line 1442 in tools.c) and recompiled. No better.
If I pause live video and then resume, playback starts in slow motion (it judders) and eventually freezes, but may then start again, still juddering and freezing.
Well, then I have no idea why it wouldn't work with an NTFS partition, while it does work with others.
Klaus
If I pause live video and then resume, playback starts in slow motion (it judders) and eventually freezes, but may then start again, still juddering and freezing.
Well, then I have no idea why it wouldn't work with an NTFS partition, while it does work with others.
Klaus
Hi Klaus,
I have similar problems using an NFS share under some circumstances:
The general problem is that the index file stops growing after a few seconds, while the 00001.ts file keeps growing normally.
1. Using a local HDD with ReiserFS for recording, everything works well.
2. Using NFS v3 with the mount options tcp,hard,sync, the index file stops growing.
3. Using NFS v3 with the mount options tcp,soft,async, the index file keeps growing (see the example mount commands below). I was happy to have figured that out, but there seems to be another problem: under heavy network load the problem occurs again.
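For reference, the two NFS setups described in points 2 and 3 roughly correspond to mount commands like the following; the server name and export path are placeholders, not taken from this thread.

mount -t nfs -o vers=3,tcp,hard,sync,intr server:/export/video /video0
mount -t nfs -o vers=3,tcp,soft,async,nointr server:/export/video /video0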
An example: cutting a movie on my NFS share while recording a new movie to the same share causes the same problem; the index file stops growing. And it does not start growing again after the cutting has finished.
This means I have to stop and restart the recording to get a correct index file again.
Could it be that the process writing the index file dies somehow?
I've never had a problem like this in older VDR versions, and I often do cutting and recording at the same time.
BTW: Is there a way to (re)create an index file for TS recordings, like genvdr did for the old VDR recordings?
-Günter
On 22.03.2009 01:42, Niedermeier Günter wrote:
If I pause live video and then resume, playback starts in slow motion (it judders) and eventually freezes, but may then start again, still juddering and freezing.
Well, then I have no idea why it wouldn't work with an NTFS partition, while it does work with others.
Klaus
Hi Klaus,
I have similar problems using an NFS share under some circumstances:
The general problem is that the index file stops growing after a few seconds, while the 00001.ts file keeps growing normally.
1. Using a local HDD with ReiserFS for recording, everything works well.
2. Using NFS v3 with the mount options tcp,hard,sync, the index file stops growing.
3. Using NFS v3 with the mount options tcp,soft,async, the index file keeps growing. I was happy to have figured that out, but there seems to be another problem: under heavy network load the problem occurs again.
An example: cutting a movie on my NFS share while recording a new movie to the same share causes the same problem; the index file stops growing. And it does not start growing again after the cutting has finished.
This means I have to stop and restart the recording to get a correct index file again.
Could it be that the process writing the index file dies somehow?
The thread that writes the index file is the same one that writes the actual recording.
I've never had a problem like this in older VDR versions, and I often do cutting and recording at the same time.
Strange - the code for actually writing the index file hasn't changed between PES and TS recording. But maybe you can take a closer look at the changes to cIndexFile between version 1.7.2 and 1.7.3.
BTW: Is there a way to (re)create an index file for TS recordings, like genvdr did for the old VDR recordings?
I believe you mean 'genindex'. Maybe I should make VDR automatically generate an index file when replaying a TS recording without one...
Klaus
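For what it's worth, a minimal sketch of what such automatic regeneration could start from. This is neither genindex nor VDR's actual index format (which also stores the frame type and file number), and it assumes the video PID is already known from the PMT; it merely records the offset of every TS packet on that PID whose payload_unit_start_indicator is set, i.e. where a new PES packet (normally a new frame) begins.

#include <cstdio>
#include <cstdint>
#include <vector>

static const int TS_SIZE = 188;

// Collect candidate frame offsets from a .ts file (rough sketch only).
std::vector<long> CollectFrameOffsets(const char *FileName, uint16_t VideoPid)
{
  std::vector<long> offsets;
  FILE *f = fopen(FileName, "rb");
  if (!f)
     return offsets;
  uint8_t p[TS_SIZE];
  long pos = 0;
  while (fread(p, 1, TS_SIZE, f) == TS_SIZE) {
        if (p[0] == 0x47) {                          // TS sync byte
           uint16_t pid = ((p[1] & 0x1F) << 8) | p[2];
           bool pusi = (p[1] & 0x40) != 0;           // payload unit start
           if (pid == VideoPid && pusi)
              offsets.push_back(pos);                // candidate frame boundary
           }
        pos += TS_SIZE;
        }
  fclose(f);
  return offsets;
}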
I've never had a problem like this in older VDR versions, and I often do cutting and recording at the same time.
Strange - the code for actually writing the index file hasn't changed between PES and TS recording. But maybe you can take a closer look at the changes to cIndexFile between version 1.7.2 and 1.7.3.
The last VDR I used was 1.4.7
This is my first step with the new development version 1.7.x.
I can't say when the problem first occurred, but I will try to figure it out, including testing older versions.
Unfortunately I'm not experienced in C/C++, but I will give it a try.
BTW: Is there a way to (re)create an index file for TS recordings, like genvdr did for the old VDR recordings?
I believe you mean 'genindex'.
Yes, of course, I meant "genindex" - it was early in the morning.
Maybe I should make VDR automatically generate an index file when replaying a TS recording without one...
This sounds good, very good! And it would go some way toward solving our problem here.
I will report back here as soon as I have news.
-Günter
Klaus Schmidinger Klaus.Schmidinger@cadsoft.de wrote:
I believe you mean 'genindex'. Maybe I should make VDR automatically generate an index file when replaying a TS recording without one...
good idea
stefan
Strange - the code for actually writing the index file hasn't changed between PES and TS recording. But maybe you can take a closer look at the changes to cIndexFile between version 1.7.2 and 1.7.3.
Perhaps you want to have a look at my log file from when the error occurs:
----snip----------------------------------------------------------------
Mar 22 12:25:06 vdr-142 vdr: [23234] record /video0/@Wintersport/2009-03-22.12.25.2-0.rec
Mar 22 12:25:06 vdr-142 vdr: [23234] creating directory /video0/@Wintersport
Mar 22 12:25:06 vdr-142 vdr: [23234] creating directory /video0/@Wintersport/2009-03-22.12.25.2-0.rec
Mar 22 12:25:06 vdr-142 vdr: [23234] recording to '/video0/@Wintersport/2009-03-22.12.25.2-0.rec/00001.ts'
Mar 22 12:25:06 vdr-142 vdr: [2257] recording thread started (pid=23234, tid=2257)
Mar 22 12:25:06 vdr-142 vdr: [2258] receiver on device 2 thread started (pid=23234, tid=2258)
Mar 22 12:25:06 vdr-142 vdr: [2259] TS buffer on device 2 thread started (pid=23234, tid=2259)
Mar 22 12:25:08 vdr-142 vdr: [23234] replay /video0/@Wintersport/2009-03-22.12.25.2-0.rec
Mar 22 12:25:08 vdr-142 vdr: [23234] playing '/video0/@Wintersport/2009-03-22.12.25.2-0.rec/00001.ts'
Mar 22 12:25:09 vdr-142 vdr: [2260] dvbplayer thread started (pid=23234, tid=2260)
Mar 22 12:25:09 vdr-142 vdr: [2261] non blocking file reader thread started (pid=23234, tid=2261)
Mar 22 12:25:11 vdr-142 vdr: [23234] timer 1 (2 1225-1525 '@Wintersport') set to event Son 22.03.2009 13:05-15:00 (VPS: 22.03 10:15) 'Wintersport'
Mar 22 12:25:14 vdr-142 vdr: [2258] buffer usage: 70% (tid=2257)
Mar 22 12:25:14 vdr-142 vdr: [2258] buffer usage: 60% (tid=2257)
Mar 22 12:25:14 vdr-142 vdr: [2258] buffer usage: 70% (tid=2257)
Mar 22 12:25:15 vdr-142 vdr: [2258] buffer usage: 80% (tid=2257)
Mar 22 12:25:16 vdr-142 vdr: [2258] buffer usage: 90% (tid=2257)
Mar 22 12:25:17 vdr-142 vdr: [2258] buffer usage: 100% (tid=2257)
Mar 22 12:25:17 vdr-142 vdr: [2258] ERROR: 1 ring buffer overflow (65 bytes dropped)
Mar 22 12:25:23 vdr-142 vdr: [2258] ERROR: 19932 ring buffer overflows (3747216 bytes dropped)
Mar 22 12:25:29 vdr-142 vdr: [2258] ERROR: 21852 ring buffer overflows (4108176 bytes dropped)
Mar 22 12:25:35 vdr-142 vdr: [2258] ERROR: 19089 ring buffer overflows (3588732 bytes dropped)
----snap----------------------------------------------------------------
BTW: I just tried to replay this recording using vlc, and the TS file is also corrupted. It is not only that the index file stops growing; the .ts file itself is defective.
Important: my NFS share has enough bandwidth to record three or more streams at the same time. There was no problem at all with older VDR versions using the same share.
-Günter
On 22.03.2009 12:39, Niedermeier Günter wrote:
Strange - the code for actually writing the index file hasn't changed between PES and TS recording. But maybe you can take a closer look at the changes to cIndexFile between version 1.7.2 and 1.7.3.
Perhaps you want to have a look at my log file from when the error occurs:
----snip----------------------------------------------------------------
Mar 22 12:25:06 vdr-142 vdr: [23234] record /video0/@Wintersport/2009-03-22.12.25.2-0.rec
Mar 22 12:25:06 vdr-142 vdr: [23234] creating directory /video0/@Wintersport
Mar 22 12:25:06 vdr-142 vdr: [23234] creating directory /video0/@Wintersport/2009-03-22.12.25.2-0.rec
Mar 22 12:25:06 vdr-142 vdr: [23234] recording to '/video0/@Wintersport/2009-03-22.12.25.2-0.rec/00001.ts'
Mar 22 12:25:06 vdr-142 vdr: [2257] recording thread started (pid=23234, tid=2257)
Mar 22 12:25:06 vdr-142 vdr: [2258] receiver on device 2 thread started (pid=23234, tid=2258)
Mar 22 12:25:06 vdr-142 vdr: [2259] TS buffer on device 2 thread started (pid=23234, tid=2259)
Mar 22 12:25:08 vdr-142 vdr: [23234] replay /video0/@Wintersport/2009-03-22.12.25.2-0.rec
Mar 22 12:25:08 vdr-142 vdr: [23234] playing '/video0/@Wintersport/2009-03-22.12.25.2-0.rec/00001.ts'
Mar 22 12:25:09 vdr-142 vdr: [2260] dvbplayer thread started (pid=23234, tid=2260)
Mar 22 12:25:09 vdr-142 vdr: [2261] non blocking file reader thread started (pid=23234, tid=2261)
Mar 22 12:25:11 vdr-142 vdr: [23234] timer 1 (2 1225-1525 '@Wintersport') set to event Son 22.03.2009 13:05-15:00 (VPS: 22.03 10:15) 'Wintersport'
Mar 22 12:25:14 vdr-142 vdr: [2258] buffer usage: 70% (tid=2257)
Mar 22 12:25:14 vdr-142 vdr: [2258] buffer usage: 60% (tid=2257)
Mar 22 12:25:14 vdr-142 vdr: [2258] buffer usage: 70% (tid=2257)
Mar 22 12:25:15 vdr-142 vdr: [2258] buffer usage: 80% (tid=2257)
Mar 22 12:25:16 vdr-142 vdr: [2258] buffer usage: 90% (tid=2257)
Mar 22 12:25:17 vdr-142 vdr: [2258] buffer usage: 100% (tid=2257)
Mar 22 12:25:17 vdr-142 vdr: [2258] ERROR: 1 ring buffer overflow (65 bytes dropped)
Mar 22 12:25:23 vdr-142 vdr: [2258] ERROR: 19932 ring buffer overflows (3747216 bytes dropped)
Mar 22 12:25:29 vdr-142 vdr: [2258] ERROR: 21852 ring buffer overflows (4108176 bytes dropped)
Mar 22 12:25:35 vdr-142 vdr: [2258] ERROR: 19089 ring buffer overflows (3588732 bytes dropped)
----snap----------------------------------------------------------------
BTW: I just tried to replay this recording using vlc, and the TS file is also corrupted. It is not only that the index file stops growing; the .ts file itself is defective.
Well, if there are buffer overflows and the TS data is corrupted, it's no wonder the index file stops growing.
Does this happen on all channels, or only on some (or even only on a particular one)?
Klaus
Well, if there are buffer overflows and the TS data is corrupted, it's no wonder the index file stops growing.
Does this happen on all channels, or only on some (or even only on a particular one)?
Well, I've tried it on several channels: ARD, ZDF, arte, Das Vierte, MünchenTV and so on.
Channels with high (6-8 Mbit/s) and normal (3-4 Mbit/s) bandwidth.
Every channel has the same problem.
Using hard,sync,intr on the NFS share reduces the sustained write throughput to roughly 4.5 to 5.0 MB/s. That should be enough to record two streams at the same time.
Using soft,async,nointr gives a higher sustained write throughput of roughly 9.5 to 10.0 MB/s.
With this setting I can make two recordings at the same time, but a third operation (recording, playback or cutting) does not work well.
-Günter
Well, if there are buffer overflows and the TS data is corrupted, it's no wonder the index file stops growing.
Does this happen on all channels, or only on some (or even only on a particular one)?
Additional information:
I recompiled older versions, starting with 1.7.3, and ran some tests.
1.7.3 acts exactly like 1.7.4:
one recording -> buffer overflow; the network throughput is between 2.8-3.7 MB/s for one stream -> much too high, I think!
1.7.2 seems to be OK:
one recording -> no problem -> throughput is between 0.8-1.2 MB/s
two recordings -> no problem -> throughput is between 1.8-2.6 MB/s
three recordings -> no problem -> throughput is between 2.8-4.0 MB/s
four recordings -> "PES packet shortened" -> throughput is between 3.8-5.4 MB/s
All tested while using the "slower" NFS share.
Why does one TS stream in 1.7.3/1.7.4 require three times the bandwidth of a PES stream in 1.7.2?
I think we're getting closer to the problem :-)
-Günter
On 22.03.2009 15:36, Niedermeier Günter wrote:
Well, if there are buffer overflows and the TS data is corrupted, it's no wonder the index file stops growing.
Does this happen on all channels, or only on some (or even only on a particular one)?
Additional information:
I recompiled older versions, starting with 1.7.3, and ran some tests.
1.7.3 acts exactly like 1.7.4:
one recording -> buffer overflow; the network throughput is between 2.8-3.7 MB/s for one stream -> much too high, I think!
1.7.2 seems to be OK:
one recording -> no problem -> throughput is between 0.8-1.2 MB/s
two recordings -> no problem -> throughput is between 1.8-2.6 MB/s
three recordings -> no problem -> throughput is between 2.8-4.0 MB/s
four recordings -> "PES packet shortened" -> throughput is between 3.8-5.4 MB/s
All tested while using the "slower" NFS share.
Why does one TS stream in 1.7.3/1.7.4 require three times the bandwidth of a PES stream in 1.7.2?
A TS recording is only marginally more data than the same length recording in PES. I don't think that recording in TS should require so much more bandwidth than PES.
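As a rough back-of-envelope check of that claim (the example bit rate below is illustrative, not from this thread): each 188-byte TS packet carries 184 bytes of payload, so keeping the TS headers adds only about 2% compared to the bare payload, plus a little more for PAT/PMT and other service packets. A channel of around 4 Mbit/s is therefore roughly 0.5 MB/s in either format, nowhere near the 2.8-3.7 MB/s reported above, which points at how the data is written rather than how much data there is.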
There must be another problem causing this, but since it doesn't happen here on my system, I'm afraid you'll need to do the debugging ;-)
Just to be sure: this *is* an unpatched version 1.7.4 we're talking about, right?
Klaus
A TS recording is only marginally more data than the same length recording in PES. I don't think that recording in TS should require so much more bandwidth than PES.
That's clear to me.
There must be another problem causing this, but since it doesn't happen here on my system, I'm afraid you'll need to do the debugging ;-)
No problem, I'll try "debugging" the problem with my tools :-), but I have no chance of "debugging" it in the code. :-(
Just to be sure: this *is* an unpatched version 1.7.4 we're talking about, right?
Yes, of course. It's installed from a fresh download, as are 1.7.3 and 1.7.2.
-Günter
A TS recording is only marginally more data than the same length recording in PES. I don't think that recording in TS should require so much more bandwidth than PES.
That's clear to me.
There must be another problem causing this, but since it doesn't happen here on my system, I'm afraid you'll need to do the debugging ;-)
No problem, I'll try "debugging" the problem with my tools :-), but I have no chance of "debugging" it in the code. :-(
Further investigation reveals ring buffer overflows reported in syslog as soon as recording starts.
I can put some tracing into the code, as Klaus has suggested. Assuming that what I am experiencing with NTFS is another manifestation of the problem Günter is seeing with NFS - what would be useful?
Just to be sure: this *is* an unpatched version 1.7.4 we're talking about, right?
Yes, of course. It's installed from a fresh download, as are 1.7.3 and 1.7.2.
Mine is also a clean 1.7.4 install. Note that I am using the latest Liplianin drivers with two patches applied: av7110_ts_replay_1.diff and av7110_v4ldvb_api5_audiobuff_test_1.diff.
Mark.
On 23.03.2009 01:46, Hawes, Mark wrote:
A TS recording is only marginally more data than the same length recording in PES. I don't think that recording in TS should require so much more bandwidth than PES.
That's clear to me.
There must be another problem causing this, but since it doesn't happen here on my system, I'm afraid you'll need to do the debugging ;-)
No problem, I'll try "debugging" the problem with my tools :-), but I have no chance of "debugging" it in the code. :-(
Further investigation reveals ring buffer overflows reported in syslog as soon as recording starts.
I can put some tracing into the code, as Klaus has suggested. Assuming that what I am experiencing with NTFS is another manifestation of the problem Günter is seeing with NFS - what would be useful?
Let me try increasing the amount of data written at once first.
Klaus
...is there a switch, or a simple way, to change the recording format from TS back to PES in 1.7.4 - just for a quick verification?
BTW:
In your environment, do you record over the network or to a local disk? Recording to a local (Linux) disk causes no problem here, and it would be hard to reproduce the issue or measure the disk load that way.
-Günter
On 22.03.2009 19:06, Niedermeier Günter wrote:
...is there a switch, or a simple way, to change the recording format from TS back to PES in 1.7.4 - just for a quick verification?
Sorry, PES recording has been completely removed from VDR.
In your environment, do you record over the network or to a local disk? Recording to a local (Linux) disk causes no problem here, and it would be hard to reproduce the issue or measure the disk load that way.
My test machine is diskless, so recordings go to an NFS mount.
Klaus
There must be another problem causing this, but since it doesn't happen here on my system, I'm afraid you'll need to do the debugging ;-)
Which changes were made to the file writing mechanism between 1.7.2 and 1.7.3?
Not code changes, because I don't understand them, but in words please. E.g. block size changed from xxx to yyy, a changed algorithm, changed caching, or something else that can influence the performance.
I found out that in 1.7.2, most of the time up to 20 stream data blocks are transmitted via NFS between one "NFS WRITE CALL / WRITE REPLY" and "NFS COMMIT CALL / COMMIT REPLY" combination and the next one.
In 1.7.3/1.7.4 the number of stream data blocks decreases to at most 5. Therefore the number of "NFS WRITE CALL / WRITE REPLY" and "NFS COMMIT CALL / COMMIT REPLY" combinations is 4 to 5 times higher than in 1.7.2.
This produces an enormous overhead, and this overhead could account for the roughly two MB/s of extra network load on top of the normal 1 MB/s per stream.
Perhaps you have an idea, Klaus.
-Günter
On 23.03.2009 01:09, Niedermeier Günter wrote:
There must be another problem causing this, but since it doesn't happen here on my system, I'm afraid you'll need to do the debugging ;-)
Which changes were made to the file writing mechanism between 1.7.2 and 1.7.3?
Not code changes, because I don't understand them, but in words please. E.g. block size changed from xxx to yyy, a changed algorithm, changed caching, or something else that can influence the performance.
I found out that in 1.7.2, most of the time up to 20 stream data blocks are transmitted via NFS between one "NFS WRITE CALL / WRITE REPLY" and "NFS COMMIT CALL / COMMIT REPLY" combination and the next one.
In 1.7.3/1.7.4 the number of stream data blocks decreases to at most 5. Therefore the number of "NFS WRITE CALL / WRITE REPLY" and "NFS COMMIT CALL / COMMIT REPLY" combinations is 4 to 5 times higher than in 1.7.2.
This produces an enormous overhead, and this overhead could account for the roughly two MB/s of extra network load on top of the normal 1 MB/s per stream.
Perhaps you have an idea, Klaus.
I believe I do. With PES recordings, data was written to the file in larger chunks, while with TS recordings it is written in blocks of 188 bytes (TS_SIZE). I'll change cFrameDetector::Analyze() to handle more data at once. I will try to provide a patch for testing tonight.
Klaus
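To illustrate the general idea (only a sketch under stated assumptions, not the actual patch; cChunkedWriter is an invented name, not a VDR class): instead of issuing one write() per 188-byte TS packet, the packets can be collected in a buffer and flushed in much larger chunks, which an NFS client can then turn into far fewer WRITE/COMMIT round trips.

#include <unistd.h>
#include <cstring>
#include <cstdint>

class cChunkedWriter {
private:
  enum { TS_SIZE = 188, CHUNK_SIZE = 64 * 1024 };
  int fd;                      // already opened recording file
  uint8_t buffer[CHUNK_SIZE];
  int used;
public:
  explicit cChunkedWriter(int Fd) : fd(Fd), used(0) {}
  ~cChunkedWriter() { Flush(); }
  void Flush(void)
  {
    if (used > 0) {
       write(fd, buffer, used);   // error handling omitted in this sketch
       used = 0;
       }
  }
  void PutPacket(const uint8_t *Data)   // exactly one 188-byte TS packet
  {
    if (used + TS_SIZE > CHUNK_SIZE)
       Flush();                          // flush before the buffer would overflow
    memcpy(buffer + used, Data, TS_SIZE);
    used += TS_SIZE;
  }
};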
On 23.03.2009 08:35, Klaus Schmidinger wrote:
On 23.03.2009 01:09, Niedermeier Günter wrote:
There must be another problem causing this, but since it doesn't happen here on my system, I'm afraid you'll need to do the debugging ;-)
Which changes were made to the file writing mechanism between 1.7.2 and 1.7.3?
Not code changes, because I don't understand them, but in words please. E.g. block size changed from xxx to yyy, a changed algorithm, changed caching, or something else that can influence the performance.
I found out that in 1.7.2, most of the time up to 20 stream data blocks are transmitted via NFS between one "NFS WRITE CALL / WRITE REPLY" and "NFS COMMIT CALL / COMMIT REPLY" combination and the next one.
In 1.7.3/1.7.4 the number of stream data blocks decreases to at most 5. Therefore the number of "NFS WRITE CALL / WRITE REPLY" and "NFS COMMIT CALL / COMMIT REPLY" combinations is 4 to 5 times higher than in 1.7.2.
This produces an enormous overhead, and this overhead could account for the roughly two MB/s of extra network load on top of the normal 1 MB/s per stream.
Perhaps you have an idea, Klaus.
I believe I do. With PES recordings, data was written to the file in larger chunks, while with TS recordings it is written in blocks of 188 bytes (TS_SIZE). I'll change cFrameDetector::Analyze() to handle more data at once. I will try to provide a patch for testing tonight.
Here's a quick shot - totally untested (no time, sorry). Please try it and let me know if it helps.
Klaus
Here's a quick shot - totally untested (no time, sorry). Please try it and let me know if it helps.
Hi,
I've tried it: the files are created, but with zero file size. Only the info file is correct.
After a few seconds VDR restarts with an "emergency exit!".
-Günter
On 23.03.2009 21:42, Niedermeier Günter wrote:
Here's a quick shot - totally untested (no time, sorry). Please try it and let me know if it helps.
Hi,
I've tried it: the files are created, but with zero file size. Only the info file is correct.
After a few seconds VDR restarts with an "emergency exit!".
Well, then I guess I do need to spend a little more time on this ;-)
Klaus
On 23.03.2009 22:13, Klaus Schmidinger wrote:
On 23.03.2009 21:42, Niedermeier Günter wrote:
Here's a quick shot - totally untested (no time, sorry). Please try it and let me know if it helps.
Hi,
I've tried it: the files are created, but with zero file size. Only the info file is correct.
After a few seconds VDR restarts with an "emergency exit!".
Well, then I guess I do need to spend a little more time on this ;-)
Just another quick thing to test: please add
            }
        Data += TS_SIZE;       <========== this line
        Length -= TS_SIZE;
        Processed += TS_SIZE;
        }
  return Processed;
}
to the end of cFrameDetector::Analyze().
Klaus
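For anyone following along, a sketch of the loop shape this fragment implies (not the actual body of cFrameDetector::Analyze()): Data, Length and Processed must all advance together, otherwise the loop keeps re-examining the very first packet while the counters still move on.

#include <cstdint>

// Minimal per-packet processing loop over a buffer of TS data (sketch only).
int AnalyzeSketch(const uint8_t *Data, int Length)
{
  const int TS_SIZE = 188;
  int Processed = 0;
  while (Length >= TS_SIZE) {
        // ... examine the 188-byte TS packet starting at Data here ...
        Data += TS_SIZE;       // the line added by the fix above
        Length -= TS_SIZE;
        Processed += TS_SIZE;
        }
  return Processed;
}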
Just another quick thing to test: please add
            }
        Data += TS_SIZE;       <========== this line
        Length -= TS_SIZE;
        Processed += TS_SIZE;
        }
  return Processed;
}
to the end of cFrameDetector::Analyze().
...looks good overall!
I've tested recording four channels at once via the "slow" NFS share, without any problem so far.
The overall network load peaked at 4.5 MB/s for the four streams.
A single stream (ARD) needs roughly 1.1 MB/s.
...The ARD people must have no end of money and bandwidth!!!
--> 15 min / 850 MB TS file
Wonderful!
One question (OT): is it normal that replaying a PES recording causes a very high CPU load on 1.7.2 and higher? -> 80-90% on a P4/2400
Replaying TS produces "no" CPU load.
-Günter
One question (OT): is it normal that replaying a PES recording causes a very high CPU load on 1.7.2 and higher? -> 80-90% on a P4/2400
Replaying TS produces "no" CPU load.
Forget it! This seems to be fixed too since your patch.
Could that be possible?
-Günter
On 03/24/09 01:09, Niedermeier Günter wrote:
One question (OT): is it normal that replaying a PES recording causes a very high CPU load on 1.7.2 and higher? -> 80-90% on a P4/2400
Replaying TS produces "no" CPU load.
Forget it! This seems to be fixed too since your patch.
Could that be possible?
I don't think so. The modified code is not involved in replaying, only in recording.
Klaus
Replaying TS produces "no" CPU load.
Forget it! This seems to be fixed too since your patch.
Could that be possible?
I don't think so. The modified code is not involved in replaying, only in recording.
However, 1.7.4 currently works without high CPU load in my environment. I had no chance to reproduce it again yesterday. With 1.7.3 or 1.7.2 I still can.
Is this a known problem?
-Günter
On 24.03.2009 09:22, Niedermeier Günter wrote:
Replaying TS produces "no" CPU load.
Forget it! This seems to be fixed too since your patch.
Could that be possible?
I don't think so. The modified code is not involved in replaying, only in recording.
However, 1.7.4 currently works without high CPU load in my environment. I had no chance to reproduce it again yesterday. With 1.7.3 or 1.7.2 I still can.
Is this a known problem?
Not to my knowledge. But since it works better in version 1.7.4, that's a step in the right direction, I'd say ;-)
Klaus
One question (OT): is it normal that replaying a PES recording causes a very high CPU load on 1.7.2 and higher? -> 80-90% on a P4/2400
Replaying TS produces "no" CPU load.
Forget it! This seems to be fixed too since your patch.
Could that be possible?
I don't think so. The modified code is not involved in replaying, only in recording.
Klaus
I too can confirm that this has fixed my problem recording to an NTFS partition. With the patches applied I have successfully recorded two streams simultaneously while playing back a third.
Thanks.
Mark.
Klaus Schmidinger wrote:
On 23.03.2009 22:13, Klaus Schmidinger wrote:
On 23.03.2009 21:42, Niedermeier Günter wrote:
Here's a quick shot - totally untested (no time, sorry). Please try it and let me know if it helps.
Hi,
I've tried it: the files are created, but with zero file size. Only the info file is correct.
After a few seconds VDR restarts with an "emergency exit!".
Well, then I guess I do need to spend a little more time on this ;-)
I hope the patch does what I thought it would: collect writes into larger chunks. For a networked application, the key performance bottleneck is the number of transactions needed. See e.g. http://portal.acm.org/citation.cfm?id=1066051.1066069 (and the PDF file there).
yours, Jouni
On 24.03.2009 17:05, Jouni Karvo wrote:
Klaus Schmidinger wrote:
On 23.03.2009 22:13, Klaus Schmidinger wrote:
On 23.03.2009 21:42, Niedermeier Günter wrote:
Here's a quick shot - totally untested (no time, sorry). Please try it and let me know if it helps.
Hi,
I've tried it: the files are created, but with zero file size. Only the info file is correct.
After a few seconds VDR restarts with an "emergency exit!".
Well, then I guess I do need to spend a little more time on this ;-)
I hope the patch does what I thought it would: collect writes into larger chunks. For a networked application, the key performance bottleneck is the number of transactions needed. See e.g. http://portal.acm.org/citation.cfm?id=1066051.1066069 (and the PDF file there).
That's exactly what the fix does - and apparently it works, judging from the feedback.
Klaus