Under what conditions does one start to consider subnetting a network?
I'm looking for a few general rules of thumb, or triggers based on measurable metrics that make subnetting something that should be considered.
Interesting Question.
Historically, prior to the advent of fully switched networks, the main consideration in breaking a network into subnets was limiting the number of nodes in a single collision domain. That is, if you had too many nodes, your network performance would peak and eventually collapse under heavy load due to excessive collisions. The exact number of nodes that could be deployed depended on lots of factors, but generally speaking you could not regularly load the collision domain much beyond 50% of the total available bandwidth and still have the network be stable all the time. Fifty nodes on a network was a lot of nodes in those days. With heavy users, you might have topped out at 20 or 30 nodes before needing to start subnetting things.
Of course, with fully switched full-duplex subnets, collisions are not a concern anymore, and assuming typical desktop-type users, you can typically deploy hundreds of nodes in a single subnet without any issues at all. Having lots of broadcast traffic, as other answers have alluded to, might be a concern depending on what protocols/applications you are running on the network. However, understand that subnetting a network does not necessarily help you with your broadcast traffic concerns. Many of the protocols use broadcasting for a reason - that is, all the nodes on the network actually need to see such traffic to implement the application-level feature(s) desired. Simply subnetting the network doesn't actually buy you anything if the broadcast packet is also going to need to be forwarded over to the other subnet and broadcast out again. In fact, that actually adds extra traffic (and latency) to both subnets if you think it through.
Generally speaking, today, the main reasons for subnetting networks have much more to do with organizational, administrative and security boundary considerations than anything else.
The original question asks for measurable metrics that trigger subnetting considerations. I am not sure there are any in terms of specific numbers. This is going to depend dramatically on the 'applications' involved, and I don't think there are really any trigger points that would generally apply.
Relative to rules of thumb in planning out subnets:
With all that said, adding subnets adds some level of administrative overhead and potentially causes problems like running out of node addresses in one subnet while having too many left over in another pool. The routing and firewall setups, the placement of common servers on the network, and so on all get more involved. Certainly, each subnet should have a reason for existing that outweighs the overhead of maintaining the more sophisticated logical topology.
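To make the "running out of addresses in one subnet while wasting them in another" point concrete, here is a rough sketch of sizing subnets from expected host counts using Python's standard ipaddress module. The parent block, group names and host counts are invented purely for illustration, not recommendations:

```python
import ipaddress
import math

# Address space to carve up and per-group host counts; all of these numbers
# are made-up examples.
parent = ipaddress.ip_network("10.20.0.0/22")
needs = {"desktops": 300, "voip": 120, "servers": 40, "mgmt": 20}

def prefix_for(hosts):
    # Smallest prefix length that still leaves room for the network and
    # broadcast addresses.
    return 32 - math.ceil(math.log2(hosts + 2))

free = [parent]                          # blocks still available
for name, hosts in sorted(needs.items(), key=lambda kv: -kv[1]):
    block = free.pop(0)
    pieces = list(block.subnets(new_prefix=prefix_for(hosts)))
    subnet = pieces[0]
    print(f"{name:9s} needs {hosts:3d} hosts -> {subnet} "
          f"({subnet.num_addresses - 2} usable)")
    free = pieces[1:] + free             # leftovers stay available for smaller groups
```

Allocating the largest groups first keeps the leftover blocks usable for the smaller ones, which is the same trade-off you make by hand when carving up an address plan.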
If it's a single site, don't bother unless you've got more than several dozen systems, and even then it's probably unnecessary.
These days, with everyone using at least 100 Mbps switches and more often 1 Gbps, the only performance-related reason to segment your network is if you're suffering excess broadcast traffic (e.g. > 2% of traffic, off the top of my head).
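If you want to check that rough threshold against real numbers, a minimal sketch is to take two readings of a switch port's packet counters (for example IF-MIB ifHCInUcastPkts / ifHCInBroadcastPkts polled over SNMP) and compute the broadcast share over the interval. The counter values below are made up for illustration:

```python
def broadcast_pct(sample_t0, sample_t1):
    """Each sample is a dict of cumulative packet counters from the same port."""
    bcast = sample_t1["broadcast"] - sample_t0["broadcast"]
    total = sum(sample_t1.values()) - sum(sample_t0.values())
    return 100.0 * bcast / total if total else 0.0

# Two hypothetical counter snapshots taken some minutes apart.
t0 = {"unicast": 9_500_000, "multicast": 40_000, "broadcast": 60_000}
t1 = {"unicast": 9_920_000, "multicast": 42_000, "broadcast": 71_500}

pct = broadcast_pct(t0, t1)
print(f"broadcast share over the interval: {pct:.2f}%")
if pct > 2.0:   # the rough ">2%" rule of thumb mentioned above
    print("broadcast load is above the rule-of-thumb threshold")
```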
The other main reason is security, e.g. a DMZ for public-facing servers, another subnet for finance, or a separate VLAN/subnet for VoIP systems.
Limiting scope for any compliance requirements you may have (e.g. PCI) is a pretty good catalyst to segment off some portions of your network. Segmenting off your payment acceptance/processing and finance systems can save money. But in general, subnetting a small network will not gain you much in the way of performance.
Another reason is Quality of Service. We run the voice and data VLANs separately so that we can easily apply QoS to the VoIP traffic.
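The heavy lifting for QoS happens on the switches and routers, but as a related illustration, an endpoint application can also mark its own voice packets with the DSCP "Expedited Forwarding" code point so that per-VLAN/per-class policies can prioritize them. This is just a sketch, behavior of the socket option varies by platform, and the peer address and port are placeholders:

```python
import socket

DSCP_EF = 46                  # Expedited Forwarding, commonly used for RTP voice
TOS_VALUE = DSCP_EF << 2      # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark outgoing packets from this socket with DSCP EF (Linux/macOS style option).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.sendto(b"fake-rtp-payload", ("192.0.2.10", 5004))   # placeholder peer
```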
You know, I've been thinking about this question more. There are a ton of good reasons to design a new network using distinct networks (performance, security, QoS, limiting DHCP scopes, limiting broadcast traffic (which can be both security and performance related)).
But when thinking of a metric for redesigning just to subnet, and thinking of networks I've had to handle in the past, all I can think of is "wow, that'd have to be one really messed up network to make me completely redesign it for subnetting". There are lots of other reasons - bandwidth, CPU utilization of the devices installed, etc. But just subnetting itself on a pure data network wouldn't usually buy a ton of performance.
Security and quality mostly (as long as the network segment in question can support the nodes in question, of course). A separate network for printer traffic, voice/phone, isolated departments like IT Ops, and of course server segments, internet-facing segments (one per internet-facing service is popular today, not just "one DMZ will do") and so on.
If you expect to scale up (you are building a network, not just 5 servers and that will be that), start routing as soon as possible. Way too many networks are unstable and hard to grow because they grew organically and have way too much layer 2 stuff.
Examples:
So in short: when you scale up to where you think you need spanning tree, please consider routing instead.
Personally, I like to take the layer 3 segmentation as close to the access switches as possible, because once you get to bigger/wider-spread networks where two core switches/routers are not sufficient, the normal redundancy mechanisms like VRRP have lots of drawbacks (traffic passes uplinks multiple times, ...) that OSPF doesn't have.
There are probably a lot of other reasons to support the use-small-broadcast-domains approach.
I think the scope of the organization matters a lot. If there are 200 hosts or fewer in total on a network and traffic doesn't need to be segmented for any reason, why add the complexity of VLANs and subnets? But the larger the scope, the more it might make sense.
Splitting up networks that normally wouldn't need to be can make some things easier though. For instance, our PDUs that supply power to servers are in the same VLAN or subnet as the servers. This means our vulnerability scanning system used on our server range also scans PDUs. Not a huge deal, but we don't need PDUs to be scanned. Also it would be nice to DHCP the PDUs since they are a pain to configure, but since they are in the same VLAN as servers right now, that is not very feasible.
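To illustrate why a dedicated PDU subnet would make scan scoping trivial: with servers and PDUs sharing one network you have to exclude PDUs host by host, whereas with a separate subnet you simply leave that network out of the target list. A small sketch using Python's ipaddress module; every address and prefix here is a made-up example, not our actual addressing:

```python
import ipaddress

server_net = ipaddress.ip_network("10.30.0.0/24")
pdu_net    = ipaddress.ip_network("10.30.1.0/28")   # hypothetical dedicated PDU subnet

scan_targets = [server_net]          # the PDU subnet simply isn't listed

def in_scope(addr):
    # A host is scanned only if it falls inside one of the target networks.
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in scan_targets)

print(in_scope("10.30.0.25"))   # True  - a server, gets scanned
print(in_scope("10.30.1.5"))    # False - a PDU, out of scope by design
```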
While we don't need another VLAN for the PDUs, it can make some things easier. And this gets into the whole more-vs-fewer-VLANs argument that will continue forever.
Me, I just think you should have VLANs where they make sense. If, for instance, we gave PDUs their own VLAN, it doesn't mean we always have to give small groups of devices their own VLAN; rather, in this case it might make sense. If a group of devices doesn't need to have its own VLAN and there are no advantages to doing so, then you might want to consider just leaving things as they are.