On Mon, Feb 18, 2008 at 03:03:44PM +0100, Andi Kleen wrote:
> Tomasz Chmielewski writes:
> >
> > Is it normal to expect the write speed go down to only few dozens of
> > kilobytes/s? Is it because of that many seeks? Can it be somehow
> > optimized?
>
> I have similar problems on my linux source partition which also
> has a lot of hard linked files (although probably not quite
> as many as you do). It seems like hard linking prevents
> some of the heuristics ext* uses to generate non fragmented
> disk layouts and the resulting seeking makes things slow.

ext3 tries to keep inodes in the same block group as their containing
directory.  If you have lots of hard links, obviously it can't really
do that, especially since we don't have a good way at mkdir time to
tell the filesystem, "Psst!  This is going to be a hard link clone of
that directory over there, put it in the same block group".

> What has helped a bit was to recreate the file system with -O^dir_index
> dir_index seems to cause more seeks.

Part of it may simply have been recreating the filesystem, not
necessarily removing the dir_index feature.  Dir_index speeds up
individual lookups, but it slows down workloads that do a readdir
followed by a stat of all of the files in the directory.  You can work
around this by calling readdir(), sorting all of the entries by inode
number, and then calling open or stat or whatever in that order.  So
this can help out for workloads that are doing a find or rm -r on a
dir_index filesystem.  Basically, dir_index helps for some things and
hurts for others.  Once things are in the cache it doesn't matter, of
course.  The following LD_PRELOAD hack can help in some cases.  Mutt
has this hack coded in for maildir directories, which helps.

> Also keeping enough free space is also a good idea because that
> allows the file system code better choices on where to place data.

Yep, that too.

						- Ted
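
Roughly, the sort-by-inode idea looks like the program below.  This is
just a standalone sketch, not the actual ld_preload; it assumes nothing
beyond POSIX opendir()/readdir()/stat(), does minimal error handling,
and all names in it are made up for illustration.

/*
 * Sketch of the readdir-then-sort-by-inode trick: read all of the
 * directory entries first, sort them by inode number, and only then
 * stat each one, so the inode table is read mostly sequentially
 * instead of in htree (readdir) order.
 */
#include <sys/types.h>
#include <sys/stat.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

struct entry {
	ino_t ino;
	char name[256];		/* NAME_MAX + 1 on most systems */
};

static int cmp_ino(const void *a, const void *b)
{
	const struct entry *ea = a, *eb = b;

	if (ea->ino < eb->ino)
		return -1;
	return ea->ino > eb->ino;
}

int main(int argc, char *argv[])
{
	const char *dirpath = argc > 1 ? argv[1] : ".";
	struct entry *list = NULL;
	size_t n = 0, cap = 0;
	struct dirent *de;
	DIR *dir;

	dir = opendir(dirpath);
	if (!dir) {
		perror("opendir");
		return 1;
	}

	/* Pass 1: slurp in all entries with their inode numbers. */
	while ((de = readdir(dir)) != NULL) {
		if (n == cap) {
			cap = cap ? cap * 2 : 64;
			list = realloc(list, cap * sizeof(*list));
			if (!list) {
				perror("realloc");
				return 1;
			}
		}
		list[n].ino = de->d_ino;
		snprintf(list[n].name, sizeof(list[n].name), "%s", de->d_name);
		n++;
	}
	closedir(dir);

	/* Sort by inode number so the stat() pass seeks less. */
	qsort(list, n, sizeof(*list), cmp_ino);

	/* Pass 2: stat in inode order instead of readdir order. */
	for (size_t i = 0; i < n; i++) {
		char path[4096];
		struct stat st;

		snprintf(path, sizeof(path), "%s/%s", dirpath, list[i].name);
		if (stat(path, &st) == 0)
			printf("%lu\t%lld\t%s\n",
			       (unsigned long)list[i].ino,
			       (long long)st.st_size, list[i].name);
	}

	free(list);
	return 0;
}

Building it with something like "gcc -O2 -o sortstat sortstat.c" (the
file name is made up) and running it against a large maildir-style
directory should show the difference versus a naive loop that stats
each entry in the order readdir() returns it.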