I have a distributed VDR system in my house, with a lot of disks that are NFS-mounted by VDR PCs in two rooms. In order to conserve energy, I have used hdparm to set a spin-down delay after which the disks turn themselves off (an example of such a setting is shown below). When /video/.update is touched (one of the VDR PCs creates/deletes/edits a recording, I move a recording into a folder, etc.) or when the vdr program is started, it reads all directories from all disks. Most of these directories are unchanged, so there really is no need to spin up a disk just to read a few inode entries. However, my observation is that the disks are always spun up. So my questions are:
1) Is there a bug in the Linux kernel that makes it spin up the disk needlessly, even though the data are still in the cache?
2) Is there a way to configure the kernel so that the inode entries are locked in the cache, or at least get a much higher cache priority than ordinary data?
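For illustration, the spin-down delay I mentioned above is set with an hdparm call along these lines (the device name and timeout here are made up for this sketch, not my actual values):

    import subprocess

    # Example only: hdparm's -S option sets the drive's standby (spin-down)
    # timeout.  Values from 1 to 240 are counted in units of 5 seconds, so
    # 120 means the disk spins down after 10 minutes of inactivity.  The
    # device name /dev/sdb is made up for this sketch.
    subprocess.run(["hdparm", "-S", "120", "/dev/sdb"], check=True)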
Thanks and Cheers, Carsten.
Carsten Koch wrote:
I haven't tried or read it myself, but this: Documentation/laptop-mode.txt might contain the information you need. At least your problem sounds like a typical laptop problem to me.
Quoting Carsten Koch:
You might also want to have a look at the following page: http://gentoo-wiki.com/HOWTO_HDD_spindown_small_server
Stefan Lucke
Carsten Koch wrote:
The problem is that the NFS server reads (and writes, when editing a recording on a client) large amounts of data, and the metadata (which hasn't been accessed since the last /video scan) gets evicted from the cache. This usually does not happen with a local vdr, and on the client vdrs, because fadvise() makes sure that the cached video data gets freed before the system experiences any memory pressure. But fadvise() only helps locally, i.e. the NFS client drops the data, but the NFS server does not (see the sketch below). I've gotten used to this; at one point, when doing the fadvise changes, I was going to implement O_DIRECT-based I/O, but decided it wasn't worth it because of the complexity and small gains (you can't really avoid a data copy with the current vdr design due to alignment restrictions, and the code still has to handle all the cases where O_DIRECT and/or AIO isn't available: older kernels, other filesystems, etc.). What you could try is:
o play with /proc/sys/vm/vfs_cache_pressure; try decreasing it significantly. What you describe as "2)" above would be "vfs_cache_pressure=0".
I haven't tried this myself; does it help? If not, the other options would probably be a bit more complex (like modifying nfsd or writing a dedicated FUSE filesystem).
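To make the fadvise point concrete, here is roughly what the client side amounts to (untested sketch; the file name is made up). POSIX_FADV_DONTNEED only drops pages on the machine that calls it, so the client frees the video data while the server keeps caching it and evicts the directory metadata instead:

    import os

    # Untested sketch (the file name is made up): stream a recording, then
    # tell the local kernel it may drop the cached pages.  This affects only
    # the page cache of the machine running this code; the NFS server still
    # caches the streamed data and reclaims dentries/inodes under pressure.
    fd = os.open("/video/example.rec/001.vdr", os.O_RDONLY)
    try:
        while os.read(fd, 1 << 20):                          # 1 MiB chunks
            pass
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)   # drop local cache
    finally:
        os.close(fd)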
artur
Artur Skawina wrote: ...
...
o play with /proc/sys/vm/vfs_cache_pressure; try decreasing it significantly. What you describe as "2)" above would be "vfs_cache_pressure=0".
Thanks, that sounds like exactly what I need!
I have set /proc/sys/vm/vfs_cache_pressure to 1 for now. I'll observe the effects for a while.
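For the record, the change itself is just a write to the procfs knob; a quick sketch (making it survive a reboot, e.g. via sysctl.conf, is left out here):

    # Sketch: lower the reclaim pressure on the dentry/inode caches.  The
    # default is 100; 1 makes the kernel strongly prefer reclaiming data
    # pages instead, and 0 would mean never reclaiming these caches at all.
    with open("/proc/sys/vm/vfs_cache_pressure", "w") as f:
        f.write("1\n")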
Carsten.
Hi,
Carsten Koch wrote:
However, my observation is that they are always spun up. So my questions are:
Did you have a look at noflushd?
http://noflushd.sourceforge.net/
Cheers, Andreas