Does anyone have experience with a heterogeneous blend of servers in a Hyper-V failover cluster? We have a cluster with blended generations of ProLiants (DL360 Gen9s and Gen10s), and I'm considering introducing Dell servers into the mix, due largely to availability and pricing. Is this a bad idea, and why?
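For context, my understanding is that the main knob for mixing CPU generations (and presumably for mixing vendors' servers, as long as the CPUs themselves are from the same vendor) is per-VM processor compatibility mode; a rough sketch, with a placeholder VM name:

```powershell
# Processor compatibility mode lets a guest live-migrate between hosts whose
# CPUs expose different feature sets (same CPU vendor only). The VM must be off.
Stop-VM -Name 'Guest01'
Set-VMProcessor -VMName 'Guest01' -CompatibilityForMigrationEnabled $true
Start-VM -Name 'Guest01'
```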
Systemspoet's questions
Should Hyper-V replication propagate changes to underlying VM settings, such as increasing memory, adding a vCPU, or, in this specific case, adding a NIC?
I'm running Server 2016 Datacenter edition, and Hyper-V replication is handled via Failover Cluster Manager and a replication broker. I added a NIC to two VMs this morning and verified that replication is working, but the replicas still do not have the new NIC after half an hour and multiple successful replication cycles.
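For reference, this is roughly how I've been checking (host and VM names are placeholders): replication reports healthy on the primary, but the adapter list on the replica still lacks the new NIC.

```powershell
# Confirm the replication relationship is healthy on the primary.
Measure-VMReplication -VMName 'AppServer01' -ComputerName 'PrimaryNode'

# Compare the network adapters on the primary copy and the replica copy.
Get-VMNetworkAdapter -VMName 'AppServer01' -ComputerName 'PrimaryNode' |
    Select-Object VMName, SwitchName, MacAddress

Get-VMNetworkAdapter -VMName 'AppServer01' -ComputerName 'ReplicaNode' |
    Select-Object VMName, SwitchName, MacAddress
```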
I have 13 Hyper-V nodes in a Microsoft Failover Cluster. About 50% of our guests are Linux and work best with static MAC addresses. Our Windows guests work either way, but to keep things simple we've been setting them to static MAC addresses as well.
Our procedure has been: provision the VM; before installing the OS, turn the VM on and then off so Hyper-V assigns a MAC address; then change the network adapter to Static, keeping the autogenerated MAC address.
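A rough PowerShell sketch of that step, with a placeholder VM name (and assuming the new guest has a single network adapter):

```powershell
# Power the VM on and off once so Hyper-V assigns a dynamic MAC from the host's range.
Start-VM -Name 'NewGuest01'
Stop-VM  -Name 'NewGuest01' -Force

# Pin the autogenerated address as a static MAC on the adapter.
$mac = (Get-VMNetworkAdapter -VMName 'NewGuest01').MacAddress
Set-VMNetworkAdapter -VMName 'NewGuest01' -StaticMacAddress $mac
```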
The issue is:
- Create guest on Node1. It gets a MAC inside Node1's MAC range.
- Move guest to Node2. No problem.
- Create a new guest on Node1. It gets a MAC inside Node1's range. I thought for sure that Failover Clustering would be smart enough to check for a conflict with ANY guest, but it just picks an address from THAT NODE's range that isn't used by any guest on THAT NODE, ignoring the rest of the cluster.
- I was depressed to find that this has actually created MAC conflicts: the node reuses an address it has already assigned to a guest that has since migrated to a different node.
The short-term solution is easy: we just run a PowerShell command to cross-reference the MAC addresses over the entire cluster (a sketch is below). But what's the long-term solution here? Should we check each autogenerated MAC address against all of the VMs in our pool? If we give each Hyper-V node the same pool, will it check across the entire cluster, or will we have even more collisions? Would SCVMM help us here, or make things worse?
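The cross-reference amounts to something like this sketch (cluster name is a placeholder, and this is an approximation rather than our exact command):

```powershell
# Requires the FailoverClusters and Hyper-V modules.
# Collect every adapter's MAC from every node, then flag duplicates.
$nodes = Get-ClusterNode -Cluster 'HVCluster01' | Select-Object -ExpandProperty Name

$adapters = foreach ($node in $nodes) {
    Get-VM -ComputerName $node | Get-VMNetworkAdapter |
        Select-Object @{n='Host';e={$node}}, VMName, MacAddress
}

# Any MAC address that appears more than once across the cluster is a conflict.
$adapters | Group-Object MacAddress | Where-Object Count -gt 1 |
    ForEach-Object { $_.Group }
```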
I'm considering a Hyper-V replication scenario in which:
SiteA is a Failover Cluster of 2012 R2 machines containing a number of Linux guests serving as production databases and application servers.
SiteB is principally a replication target for disaster recovery purposes. In addition, it's been suggested that we may need to run one very low-utilization Linux guest in the SiteB building. Is it possible to run a live machine on a replication target, assuming that the replication target is suitably sized? Are there any specific drawbacks to doing so, assuming it's even possible?
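For what it's worth, my understanding is that the replica host is still an ordinary Hyper-V host, so a non-replicated guest could in principle run alongside the replicas; this is roughly how I'd expect to tell them apart (host name is a placeholder):

```powershell
# List guests on the SiteB host with their replication role: inbound replicas
# show ReplicationMode 'Replica', a locally running utility VM would show 'None'.
Get-VM -ComputerName 'SiteB-HV01' |
    Select-Object Name, State, ReplicationMode, ReplicationHealth
```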
We need to set a default quota for all users. From what I can see in man 8 xfs_quota, you can only set quotas for individual users. I need to set a quota that applies to everyone without having to enumerate each user.
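For reference, the per-user form I'm referring to looks roughly like this (mount point, limits, and user name are just examples); it's exactly this per-account repetition I'm trying to avoid:

```
xfs_quota -x -c 'limit bsoft=500m bhard=600m someuser' /home
```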
I have three Linux-based mail routers that run Postfix and relay mail to our on-premises Exchange server as well as to outlook.com, splitting the mail based on LDAP attributes. What I've observed sporadically since upgrading this spring from Exchange 2007 to 2010 is that all three of the mail relays will, for about 20 minutes, fail to connect to Exchange.
Postfix logs it as "lost connection with exchange.contosso.edu"; the problem almost always hits all three mail relays at the same time and lasts slightly under 20 minutes. If I can catch it while it's occurring, and I manually run "telnet exchange.contosso.edu 25" from one mail relay and force a message through (HELO, MAIL FROM, RCPT TO, DATA, etc.), then that clears that relay up.
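The manual test looks roughly like this (the addresses are made up and the server responses are from memory, so treat it as illustrative only):

```
telnet exchange.contosso.edu 25
220 exchange.contosso.edu Microsoft ESMTP MAIL Service ready
helo relay1.contosso.edu
250 exchange.contosso.edu Hello [10.0.0.11]
mail from:<postmaster@contosso.edu>
250 2.1.0 Sender OK
rcpt to:<someuser@contosso.edu>
250 2.1.5 Recipient OK
data
354 Start mail input; end with <CRLF>.<CRLF>
Subject: test

test body
.
250 2.6.0 Queued mail for delivery
quit
```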
The exchange "server" is actually two machines with the HT role on them, load balanced via windows NLB.
I've worked pretty hard to figure out what's happening from the Postfix side and I can't see any evidence of misbehavior there. My question is: how do I attack the problem from the Exchange side? Is there a connection log, a debug setting, or something else I can enable to log all of the inbound connections and tell me why Exchange is dropping them?
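For what it's worth, the only knob I'm aware of on the Exchange side is receive connector protocol logging, though I don't know whether the SmtpReceive logs will actually show why the sessions drop; a sketch from the Exchange Management Shell, with placeholder server and connector names:

```powershell
# Turn on verbose SMTP protocol logging on the receive connector the relays hit;
# sessions then get recorded in the SmtpReceive protocol logs on that HT server.
Set-ReceiveConnector -Identity 'EXCH01\Default EXCH01' -ProtocolLoggingLevel Verbose

# Confirm which connectors have verbose logging enabled.
Get-ReceiveConnector -Server 'EXCH01' |
    Select-Object Name, ProtocolLoggingLevel
```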
We're an all-ProLiant shop with around 50 servers, mostly DL360s and DL380s, from G5s through G7s. We just got our first two Gen8s in and went to rack them. We were stunned to find that the new cable management arms protrude almost an inch deeper into the rack than previous iterations of the ProLiant line.
Unfortunately, that puts them in the same space as the PDUs in our APC racks. In a sparsely populated section of rack that's no big deal, but in a densely populated section it's impossible to get the cable arm into place without dislodging another machine's power. Has anyone else run into this? Obviously, racking machines without cable management arms is not an option. I suppose we could reconfigure our racks, but that's a nightmare.