This is a Canonical Question about choosing a network switch for a datacentre
When shopping for a network switch that will sit at the top of a datacentre rack, what specific things should I be looking for?
i.e. What makes a $3,000 Cisco switch that requires annual maintenance a smarter buy than a $300 Netgear Smart switch with a lifetime warranty?
Context is everything... There's no blanket answer.
If you're trying to ask: "what differentiates an expensive switch from a low-end switch?" or
"is there a reliability difference between a high-end switch and an inexpensive switch?"
The answers are "feature-set" and "maybe", respectively... I've used a $40,000 switch for the specific purpose of connecting two 10GbE WAN connections in a data center cabinet. I've also seen $100 unmanaged Netgear FS524 switches run the "core" of a $400 million/year company for 9 years, with no reliability issues...
"You're only using me for my 10GbE ports, routing capabilities and good looks..." - Cisco 4900M.
If you're looking for a rule or general advice that can be applied across the board, there are a few considerations that deserve attention:
For my money, the one absolutely mandatory thing is that it be remotely manageable. Sooner or later you'll get a duplicate IP address, a duplex mismatch, a hard-to-track-down top-talker, or some other problem that can be answered in seconds with a manageable switch. If you can't ask your switch what it thinks about packet counts, errors on ports, where the MAC addresses are, that kind of thing, and if you can't do it remotely, you'll be sitting in a data centre for hours, unplugging cables one at a time to see if the problem has gone away.
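To make "asking your switch" concrete, here's a minimal sketch that polls the standard IF-MIB error counters over SNMP using Python's pysnmp. The address, community string, and interface index are placeholders, and it assumes a switch with SNMP read access enabled:

```python
# Poll per-port error counters over SNMP (IF-MIB).
# All device details below are hypothetical placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

SWITCH = "192.0.2.10"   # management address (placeholder)
COMMUNITY = "public"    # read-only community (placeholder)
IF_INDEX = 3            # interface to inspect (placeholder)

# ifInErrors/ifOutErrors are standard IF-MIB objects; any managed
# switch worth buying will answer them.
for counter in ("ifInErrors", "ifOutErrors"):
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(COMMUNITY),
        UdpTransportTarget((SWITCH, 161)),
        ContextData(),
        ObjectType(ObjectIdentity("IF-MIB", counter, IF_INDEX)),
    ))
    if error_indication or error_status:
        print(f"query failed: {error_indication or error_status}")
    else:
        for var_bind in var_binds:
            print(" = ".join(x.prettyPrint() for x in var_bind))
```

Ten seconds of that across every port beats an afternoon of cable-pulling.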
The remote manageability should be via CLI, not just a stupid web browser, because the networking gods will cause your switch to fail when you're in the middle of nowhere, and only able to connect to the DC over a slow EDGE connection, via a stupid web proxy that insists all graphics are bad.
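A CLI also means everything is scriptable over plain SSH, which copes fine with that slow EDGE link. A rough sketch with Netmiko; the device details are made up, and "cisco_ios" is just one of many device types it speaks:

```python
# Run the usual diagnostics over SSH -- it's all just text.
from netmiko import ConnectHandler

switch = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",    # placeholder management address
    "username": "admin",     # placeholder credentials
    "password": "changeme",
}

conn = ConnectHandler(**switch)
print(conn.send_command("show interfaces counters errors"))
print(conn.send_command("show mac address-table"))
conn.disconnect()
```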
After that, it depends on whether the switch needs to be configured or not. If it doesn't need significant configuration (no VLANs!), then anything that's half-decent will do, because when it fails, you'll replace it with whatever seems nearest to half-decent at the time.
If configuration is required, there's definite value in buying a long-lived, UI-stable brand like Cisco, because you're most likely to be able to take the config for the old switch out of your config repository and blow it onto the new switch with minimal problems (another reason why a CLI is good; web configs can't be trivially saved in, or restored from, a repository).
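As a sketch of what that workflow looks like: because the running config is plain text pulled over the CLI, backing it up into a repository is a few lines. Everything here (Netmiko again, the repo path, the credentials) is an assumed placeholder, not a prescription:

```python
# Pull the running config over SSH and commit it to a git repo.
import subprocess
from pathlib import Path
from netmiko import ConnectHandler

CONFIG_REPO = Path("/srv/network-configs")  # placeholder: an existing git repo

def backup_config(host: str, username: str, password: str) -> None:
    conn = ConnectHandler(device_type="cisco_ios", host=host,
                          username=username, password=password)
    running_config = conn.send_command("show running-config")
    conn.disconnect()

    (CONFIG_REPO / f"{host}.cfg").write_text(running_config)
    subprocess.run(["git", "-C", str(CONFIG_REPO), "add", f"{host}.cfg"],
                   check=True)
    # Note: git exits non-zero if nothing changed; fine for a sketch.
    subprocess.run(["git", "-C", str(CONFIG_REPO), "commit", "-m",
                    f"backup {host}"], check=True)

backup_config("192.0.2.10", "admin", "changeme")
```

Restoring is the reverse: paste the saved config back in over the same CLI.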
The final thing to consider is modularity. I've just decommissioned a data centre deployment that I built and installed over ten years ago. We went for very expensive, large (4U) modular HP switches, so that if one failed, we could replace the faulty blade without downtime. During those ten years, no blade failed; the modular switches were a waste of money and rack space. If I had it to do over again, I'd have used high-port-density switches instead; still high-quality, to minimise the chance of failure, but now that you can get 48 ports in 1U, it's an odd rack that needs more than 96 ports, even with dual-connect for everything.
Depending on the application, power consumption may also matter. Power in a colo space can get expensive fast, and you don't want to use a 250W switch (e.g. the ProCurve 6600-48G) where a 50W switch will do (e.g. the ProCurve 2920-48G).
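The arithmetic is worth doing once. A quick sketch; the $/kWh figure is an assumption, and colo power is often billed well above residential rates (or per provisioned circuit), so check your contract:

```python
# Back-of-the-envelope annual power cost for a 250 W vs 50 W switch.
HOURS_PER_YEAR = 24 * 365    # 8,760
RATE_PER_KWH = 0.20          # assumed USD/kWh -- check your colo contract

for watts in (250, 50):
    kwh = watts * HOURS_PER_YEAR / 1000
    print(f"{watts:>3} W: {kwh:,.0f} kWh/yr = ${kwh * RATE_PER_KWH:,.2f}/yr")
```

That 200W difference is roughly 1,750 kWh a year (about $350 at the assumed rate), before counting the extra heat your cooling has to remove.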
There are a lot of differences between a $3,000 switch and a $300 switch; most of them come down to feature set, manageability, and support.
Just my 2 cents.