This is actually a server at home, but it felt complicated enough not to belong on Super User, and the problem could easily apply to a professional situation.
I have a file server running Debian (Lenny 5.0.4) with XFS on LVM on top of a RAID 5, with the OS drive separate from the RAID. It's also running Apache, Samba, and PostgreSQL. Side note: before the RAID 5 critics crucify me, I'm using RAID 5 because I get more bang for the buck on raw drive space while still keeping some fault tolerance.
When the box is freshly booted (whether via shutdown or reboot), reading/writing to its Samba share maxes out the gigabit network connection. Over time this slowly degrades, eventually dropping below 10 MB/s; after a reboot, the speed returns to maxing out the connection.
Why is this happening, and is there a way to 'clear' out whatever's causing it without taking the server down?
Thanks in advance!
EDIT: To answer @LapTop006's question, the output of cat /proc/mdstat is the same right after a reboot and once it's slow:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[0] sda[5] sdb[4] sdf[3] sdg1[2] sde1[1]
4883799680 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
unused devices: <none>
According to xfs_db's frag command:
actual 58969, ideal 23904, fragmentation factor 59.46%
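For anyone wanting to reproduce the check: the frag figure comes from xfs_db, and xfs_fsr (from the xfsdump package on Lenny) can defragment the filesystem while it's mounted. A sketch, using the device and mount point from my fstab (needs root):

```shell
# Read-only fragmentation report on the LV backing the XFS filesystem
xfs_db -r -c frag /dev/mapper/oomox-lvm

# Online defragmentation of the mounted filesystem
# (by default xfs_fsr runs for up to two hours, then remembers where it left off)
xfs_fsr /raid
```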
EDIT 2: I'm using the standard Debian kernel. cat /etc/fstab outputs this for my OS drive and raid:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/sda1 / ext3 errors=remount-ro 0 1
/dev/mapper/oomox-lvm /raid xfs defaults 0 2
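(One low-risk tweak I've seen suggested for XFS shares, though it's my own guess and not part of the original setup: mounting with noatime avoids a metadata update on every file read, e.g.:

```
/dev/mapper/oomox-lvm /raid xfs defaults,noatime 0 2
```

This requires a remount to take effect.)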
To be honest, I'm not exactly the biggest Linux guru, and I didn't create the RAID or LVM from the command line (i.e. with mkfs.xfs); I used the UI-based RAID setup in the Debian installer, and only used the command line when I needed to add drives to the array.
When it starts slowing down again I'll post the iostat output.
EDIT 3:
Whether slow or fast, the iostat output shows bytes read and written evenly across all the drives. I also tried setting
socket options = TCP_NODELAY
in the Samba config as per @Avery Payne's advice, but it was still slow. At least the problem has been narrowed down, though, since restarting just Samba fixes the issue. It's pretty odd, since I never had this problem until fairly recently.
FINAL EDIT: I tried @David Spillett's suggestion of running
time dd if=/dev/sda of=/dev/null
for each drive when it's slow, to see whether there's any difference from when it's fast, and there isn't. So the problem is clearly with Samba.
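As a side note for anyone repeating the measurement: reading the raw devices needs root, and unless caches are dropped you may partly measure the page cache. A runnable sketch against a scratch file (paths and sizes are my own choice, not from the suggestion):

```shell
# Create a 64 MiB scratch file, then time a sequential read of it.
# To test a real member disk instead, point if= at e.g. /dev/sda (as root),
# and consider `echo 3 > /proc/sys/vm/drop_caches` first to bypass the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fsync 2>/dev/null
time dd if=/tmp/ddtest of=/dev/null bs=1M
rm -f /tmp/ddtest
```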
I'm awarding the answer to @Avery Payne. Although @David Spillett's answer has a great slew of troubleshooting techniques, @Avery Payne pointed me in the right direction for solving this issue. I'll post an update if I ever find the final solution.
Thanks everyone!