Mailing List archive
[vdr] Re: performance during cutting
On 26 Mar, Emil Naepflein wrote:
>> I already tried to slow down the thread by inserting
>> usleep() calls with several different delays,
>
> I have also added usleep calls into the cutting thread until the cutting
> rate was not more than about 3 MB/s. With this the response time and
> cutting time is acceptable.
So my idea was to tune the usleep() delays automatically to a
value that limits the impact of cutting on the foreground but does not
slow down cutting unnecessarily.
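A minimal sketch of the throttling idea: sleep after each chunk so the copy loop averages a target rate (~3 MB/s, the figure from Emil's mail). BUF_SIZE and the helper name are assumptions for illustration, not values or code from VDR.

```c
/* Sketch: cap the cutter's copy loop at a target rate by sleeping
 * after each chunk.  BUF_SIZE is an assumed chunk size. */
#include <unistd.h>

#define BUF_SIZE    (64 * 1024)          /* bytes copied per iteration */
#define TARGET_RATE (3 * 1024 * 1024)    /* target bytes per second */

/* Delay that makes BUF_SIZE bytes per iteration average TARGET_RATE. */
static long throttle_usec(void)
{
    return (long)((long long)BUF_SIZE * 1000000 / TARGET_RATE);
}

/* In the cutting loop, after each write():
 *     write(out, buf, BUF_SIZE);
 *     usleep(throttle_usec());
 */
```

Auto-tuning would then mean adjusting the delay up or down depending on observed foreground latency instead of hard-coding the rate.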
>> but I think
>> the main problem is the caching strategy of the kernel/filesystem.
>
> This is one of the problems. The other problem is the cpu usage, at
> least with slower cpus. Please run top during cutting and you will see.
>
> I am not really sure whether this will help. It is certainly a rather
> complicated change. We won't know whether it improves the
> responsiveness unless someone does experiments.
ok, I will try to collect some statistical data in the next days.
> If the write-back really is the problem, then an fsync every few MB
> may avoid tying up buffers which contain valuable information.
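One way to sketch that suggestion: after every few MB written, force the data out with fdatasync() and then tell the kernel the pages can be dropped with posix_fadvise(POSIX_FADV_DONTNEED). The chunk size and helper name are assumptions, not code from VDR.

```c
/* Sketch: flush and drop the cutter's output every SYNC_CHUNK bytes
 * so it does not evict other cached data.  SYNC_CHUNK is assumed. */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <unistd.h>

#define SYNC_CHUNK (4 * 1024 * 1024)   /* flush every 4 MB */

/* Call after each write(); 'written' is the running byte count. */
static void maybe_flush(int fd, off_t *written, ssize_t n)
{
    *written += n;
    if (*written >= SYNC_CHUNK) {
        fdatasync(fd);                                  /* force write-back now   */
        posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);   /* let pages be reclaimed */
        *written = 0;
    }
}
```

The fdatasync() also smooths out the big bursty write-backs that otherwise stall foreground I/O.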
>
> Direct write-through to disk without using the buffer pool may be even
> better. The same is true for reading the files. Reading files larger
> than memory makes the buffer cache pretty useless. For this use case a
> kind of raw mode for files would be nice. You use only a fixed amount of
> buffering to do write-behind and read-ahead, just enough to keep the data
> flowing. The buffers used should not change but should be reused. With
> this, all other buffers containing important information are left
> untouched.
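On Linux this "raw mode" can be approximated with O_DIRECT: I/O bypasses the page cache, and a single fixed, aligned buffer is reused for every transfer. A minimal sketch, assuming a 4096-byte alignment requirement; the function names and chunk size are illustrative, not VDR code.

```c
/* Sketch: copy through one fixed, sector-aligned buffer with the page
 * cache bypassed.  O_DIRECT requires buffer, offset and length to be
 * aligned to the device block size (4096 assumed here). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define ALIGN 4096           /* assumed device block size */
#define CHUNK (256 * 1024)   /* fixed transfer size, reused every pass */

/* Allocate the single aligned buffer that is reused for all transfers. */
static void *raw_buffer(void)
{
    void *buf = NULL;
    if (posix_memalign(&buf, ALIGN, CHUNK) != 0)
        return NULL;
    return buf;
}

/* Copy src to dst through that one buffer, bypassing the page cache.
 * A final, unaligned tail chunk would need special handling with
 * O_DIRECT; that is omitted in this sketch. */
int raw_copy(const char *src, const char *dst)
{
    int in  = open(src, O_RDONLY | O_DIRECT);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
    void *buf = raw_buffer();
    if (in < 0 || out < 0 || buf == NULL)
        return -1;
    ssize_t n;
    while ((n = read(in, buf, CHUNK)) > 0)   /* same buffer every pass */
        if (write(out, buf, n) != n)
            return -1;
    close(in);
    close(out);
    free(buf);
    return n < 0 ? -1 : 0;
}
```

Because only this one buffer is touched, the rest of the buffer cache keeps the blocks other processes care about.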
Ok, this avoids wasting file buffers and so accelerates
access to files that were in a file buffer before (e.g. directories of
recordings and especially the inode lists of big files).
On the other hand, the high-bandwidth transfer of the cutting thread
itself still competes with all other (e.g. foreground) disk accesses.
So there is a lack of fairness (large transfers block small ones).
The problem is that you can't set priorities for access to the resource
"bandwidth to the filesystem", so there are no real-time assurances (only
unfair best-effort approaches) when accessing a file under Linux.
Joachim
--
--------------------------------------------------
Joachim Thees <thees@informatik.uni-kl.de>
Univ. of Kaiserslautern, Computer Networks Group
--------------------------------------------------