Artur Skawina wrote:
> well, vdr with the recent cUnbufferedFile changes was flushing the data buffers in huge bursts; this was even worse than slowly filling up the caches -- the large (IIRC ~10M) bursts caused latency problems (apps visibly freezing etc.).
Does this freezing apply to local disk access, or only to network filesystems? My personal VDR is a system dedicated to VDR usage which uses a local hard disk for storage. So I have no applications running in parallel to vdr that could freeze, nor can I actually test the behaviour on network devices. It seems you have both of these extra features, so it would be nice to know more about this.
For local usage I found that IO interruptions of less than a second (10 MB burst writes on disks which deliver a lot more than 10 MB/s) have no negative side effects. But I can imagine that on 10 Mbit Ethernet such bursts could be hard to handle ... I did not think about this when writing the initial patch ...
> This patch makes vdr use a much more aggressive disk access strategy. Writes are flushed out almost immediately and the IO is more evenly distributed. While recording and/or replaying, the caches do not grow, and when vdr is done accessing a video file, all cached data from that file is dropped.
Actually, with the patch you attached my cache _does_ grow. It does not only grow - it displaces the inode cache, which is exactly what the initial patch was created to avoid. To make it worse, when cutting a recording and replaying the newly cut recording at the same time, I get major hangs in replay.
I had a look at your patch - it looked very good. But for whatever reason it doesn't do what it is supposed to do on my VDR. I currently don't know why it doesn't work here for replay - the code there looked good.
I like the heuristics you used to deal with read-ahead - but maybe these lead to the leaks I experience here. I will have a look at it. Maybe I can find out something about it ...
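Just to make sure we are talking about the same mechanism - here is a rough sketch of how I understand the read-ahead hinting to work; the function name and window size are my own assumptions, not taken from your patch:

    #include <fcntl.h>
    #include <unistd.h>

    #define READAHEAD_WINDOW (2 * 1024 * 1024)  /* assumed 2 MB hint window */

    /* Hint the kernel to prefetch the next window, and drop pages we
       have already consumed so the cache does not keep growing behind
       us. posix_fadvise() is only advice and may be ignored. */
    static void HintReadAhead(int fd, off_t offset)
    {
      posix_fadvise(fd, offset, READAHEAD_WINDOW, POSIX_FADV_WILLNEED);
      if (offset > READAHEAD_WINDOW)
         posix_fadvise(fd, 0, offset - READAHEAD_WINDOW, POSIX_FADV_DONTNEED);
    }

If the DONTNEED range ever lags behind what was actually read, pages accumulate - which would look exactly like the leak I am seeing.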
> I've tested this with both local disks and NFS mounted ones, and it seems to do the right thing. Writes get flushed every 1..2s at a rate of 0.5..1 MB/s instead of the >10 MB bursts.
To be honest - I did not find the place where writes get flushed in your patch. posix_fadvise() doesn't seem to influence flushing at all; it only applies to already written buffers. So the normal write strategy is used with your patch - collect data until the kernel decides to write it to disk. This leads to "collect about 300 MB" here, followed by a burst of up to 300 MB. That is a bit heavier than the 10 MB bursts before ;)
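To actually bound the bursts you would have to force the writeback yourself before dropping the pages - something along these lines (a minimal sketch of my own, not from either patch; the chunk size and helper name are made up):

    #include <fcntl.h>
    #include <unistd.h>

    #define FLUSH_CHUNK (1024 * 1024)  /* assumption: flush after every 1 MB written */

    static off_t flushed_up_to = 0;

    /* posix_fadvise(POSIX_FADV_DONTNEED) only drops pages that are
       already clean; dirty pages stay in the cache until the kernel's
       writeback kicks in. Forcing them out with fdatasync() first (or
       Linux' sync_file_range() for just this range) makes the DONTNEED
       actually take effect. */
    static void FlushIfNeeded(int fd, off_t current_offset)
    {
      off_t pending = current_offset - flushed_up_to;
      if (pending >= FLUSH_CHUNK) {
         fdatasync(fd);
         posix_fadvise(fd, flushed_up_to, pending, POSIX_FADV_DONTNEED);
         flushed_up_to = current_offset;
      }
    }

Without such an explicit sync step, posix_fadvise() on freshly written data is mostly a no-op, which would explain the 300 MB bursts I see here.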
Regards,
Ralf