I have the following configuration: two physical (non-virtual) Windows Server 2008 machines, a database server and an application server, each connected to the same network through two network adapters, a fast one (1 Gbps) and a slow one (100 Mbps). The adapters have different IP addresses but otherwise share the same configuration.
The application server requests data from the database server, either through the database itself or from file shares. It connects to the shares using the machine name: \\DataServer01\<FileName>. The first IP address associated with DataServer01 on the DNS server is 192.168.1.19 (the one used by the 1 Gbps adapter). I want that address to be used every time, and the slow one to be used only if the fast one fails.
Sometimes the application server downloads files from the share at maximum speed, but at other times the transfer uses the fast 192.168.1.22 on the application server side and the slow 192.168.1.18 on the database server side, limiting throughput to ≈11 MB/s.
I don't have precise metrics, but from what I've seen, it seems to fail to use the fast connection about half of the time, seemingly at random.
If I specify \\192.168.1.19\<FileName> instead of \\DataServer01\<FileName>, everything works well, at maximum speed.
How can I diagnose what's happening? Is there a policy that forces Windows to choose a random network adapter when serving files from a share? Are there settings to check in the DNS Server role of Windows Server 2008?
As devicenull said, this is a problem of name resolution.
What I suspect is happening is that you are relying on NetBIOS name resolution. Without a WINS server running on the network, NetBIOS resolution works via network broadcast: whichever adapter on the server happens to answer the broadcast first is the one that gets used, until the cached entry expires (after roughly 10 or 15 minutes, I think), and then there will be another broadcast.
You can read more about this here: http://www.techrepublic.com/article/how-netbios-name-resolution-really-works/5034239
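You can check which mechanism is actually answering from the application server. This is a sketch; run it in an elevated command prompt. If nslookup returns 192.168.1.19 but the NetBIOS cache shows 192.168.1.18 for DataServer01, broadcast resolution is winning over DNS:

    rem What does DNS return for the name?
    nslookup DataServer01

    rem Dump the local NetBIOS name cache to see which IP was cached for DataServer01
    nbtstat -c

    rem Purge and reload the NetBIOS remote cache table, then retry the transfer
    nbtstat -R

    rem Check which address the resolver picks for an ordinary connection
    ping -4 DataServer01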
Well, first off, it's not really redundant if the second connection can't handle the load of the primary connection. What's going to happen when the primary card fails while it was pushing 500 Mbit/s of traffic? Your secondary card isn't going to be able to keep up. Why are you doing this? Perhaps you would be better served by some other technology.
The issue you are seeing here is probably because you are using hostnames instead of IP addresses. Whichever card happens to register with the network first is the one whose address the hostname resolves to, and that is essentially random (it depends on the timing of when the computer was turned on, when each card came up, and so on).
There's no really clever solution here. Unless you can convince the computers to always announce their hostname on the primary adapter first, you aren't going to get this to work as you want. Gigabit Ethernet adapters are around $13 each; why not just pick up two more and stop using the 100 Mbit ones?
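That said, if replacing the cards isn't an option, one way to "convince" the machines is to stop the slow adapter from registering its address in DNS at all, so the hostname only ever resolves to the fast adapter's IP. A sketch, assuming static addressing; the connection name "Slow LAN" and the DNS server address 192.168.1.1 are placeholders for your own values:

    rem On DataServer01: keep the slow adapter's address out of DNS
    netsh interface ip set dns name="Slow LAN" source=static addr=192.168.1.1 register=none

    rem Then re-register the remaining addresses with the DNS server
    ipconfig /registerdns

You may also need to delete the stale A record for the slow address on the DNS server, and disabling NetBIOS over TCP/IP on the slow adapter (WINS tab of its TCP/IPv4 properties) closes the broadcast path described in the other answer.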
First, your adapters aren't redundant. You have two separate adapters, and that's not even close to the same thing as redundancy. You need to set up teaming on the adapters. They should be from the same vendor, configured with that vendor's teaming tools. They should also be the same speed; mixed speeds may work, but there are reasons this sort of setup isn't recommended.
Once they're teamed, you can specify a primary adapter with failover redundancy. Your current setup is not fault tolerant; you can quickly test this by unplugging the NIC being used. (Testing like this is part of setting up a server: any feature you think you have but haven't tested should be assumed non-functional, whether that's redundancy, backup solutions, or performance.)
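Whichever route you take, you can confirm during such a test which addresses an SMB transfer is actually riding on. A sketch, run on either machine while a copy is in progress (SMB uses TCP port 445):

    rem List established connections on the SMB port; the local and remote
    rem address columns show which adapters are actually carrying the transfer
    netstat -n | findstr :445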