I haven't measured throughput yet (that's at the top of my to-do list; this question is theoretical). I want to know the most standard method for trunking VLANs from multiple Gigabit (GbE) switches to a core Layer 3 GbE switch.
EDIT: Say you have three VLANs, two are on their own 24x10/100/1000 L2 managed switch; the other VLAN is on the core L3 switch doing inter-VLAN routing:
VLAN10 (10.0.0.0/24) Servers: your typical Windows DC/file server, Exchange, and an Accounting/SQL server.
VLAN20 (10.0.1.0/24) Sales: needs access to everything on VLAN10; doesn't need access to VLAN30 and vice versa.
VLAN30 (10.0.2.0/24) Support: needs access to everything on VLAN10; doesn't need access to VLAN20 and vice versa.
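For illustration, the access rules above could be expressed as routed ACLs on the core L3 switch. This is only a sketch in Cisco IOS-style syntax (an assumption on my part; D-Link and other vendors use different but comparable commands):

```
! Applied inbound on the Sales (VLAN20) routed interface:
ip access-list extended SALES-IN
 permit ip 10.0.1.0 0.0.0.255 10.0.0.0 0.0.0.255   ! Sales -> Servers: allow
 deny   ip 10.0.1.0 0.0.0.255 10.0.2.0 0.0.0.255   ! Sales -> Support: block
 permit ip any any                                  ! everything else (e.g. Internet)
!
interface Vlan20
 ip access-group SALES-IN in
```

A mirror-image ACL on the VLAN30 interface would give you the "and vice versa" half.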
Here's how I think this should work in my head:
Switch #1: Ports 2-20 are assigned to VLAN20; all the Sales workstations and printers are connected here. Optional 10 GbE combo port #1 is trunked to the L3 switch's 10 GbE combo port #1.
Switch #2: Ports 2-20 are assigned to VLAN30; all the Support workstations and printers are connected here. Optional 10 GbE combo port #1 is trunked to the L3 switch's 10 GbE combo port #2.
Core L3 switch: Ports 2-10 are assigned to VLAN10; all three servers are connected here.
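The uplinks in the layout above would be 802.1Q tagged trunks. As a sketch of what that configuration might look like on Switch #1's uplink, again in Cisco IOS-style syntax (assumed for illustration; the actual CLI depends on the vendor):

```
! Switch #1 uplink to the core: tag VLAN20 frames on the wire
interface TenGigabitEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 20
!
! A Sales access port on the same switch
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 20
```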
A standard 24-port 10/100 switch usually comes with one or two 1 GbE uplink ports. Carrying that logic over to a 24-port 10/100/1000 switch, the "optional" 10 GbE combo ports that most higher-end switches offer seem like they shouldn't be optional at all.
Keep in mind I haven't tested anything yet. I'm moving in this direction primarily for growth (I don't want to buy 10/100 switches and have to replace them within a couple of years) and security (being able to control access between VLANs with L3 routing/packet-filtering ACLs).
Does this sound right? Do I really need the 10 GbE ports? It seems non-standard and expensive, but it "feels" right when you think about 40 or 50 workstations trunking up to the L3 switch over standard 1 GbE ports. If, say, 20 workstations want to download a 10 GB image from the servers concurrently, wouldn't the trunk be the bottleneck? With a 10 GbE trunk, at least ten 1 GbE nodes could reach their theoretical maximum simultaneously.
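To put rough numbers on that bottleneck scenario, here's a back-of-the-envelope calculation assuming the uplink is the only constraint, bandwidth is shared perfectly evenly, and protocol overhead is ignored (real-world numbers will be lower):

```python
# Rough bottleneck estimate: N clients pulling a large file through a
# shared uplink, assuming perfect fair sharing and no protocol overhead.

def transfer_time_seconds(file_gb, clients, uplink_gbps):
    file_gbits = file_gb * 8                    # GB -> gigabits
    per_client_gbps = uplink_gbps / clients     # fair share of the uplink
    return file_gbits / per_client_gbps

# 20 workstations pulling a 10 GB image over a single 1 GbE trunk:
print(transfer_time_seconds(10, 20, 1))   # 1600 s, roughly 27 minutes

# Same load over a 10 GbE trunk:
print(transfer_time_seconds(10, 20, 10))  # 160 s, under 3 minutes
```

So under these idealized assumptions, a single 1 GbE trunk leaves each of the 20 clients with only 50 Mbps.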
What about switch stacking? Some of the D-Links I've been looking at have HDMI interfaces for stacking. As far as I know, stacking two switches creates one logical switch, but is this just for management I/O, or do the switches use the (assuming it's HDMI 1.3) 10.2 Gbps link for carrying data back and forth?
Start simple and iterate.
Trunk them together with a single GigE link. If you've got ~20 workstations and printers in a typical office, you're probably not hitting GigE wire speed at this point anyway.
The next step is GigE link aggregation (802.3ad/LACP). Try two GigE connections between each switch, or maybe four. If your traffic is spread fairly evenly over your ~20 workstations and printers, you should get good balance across these links. As a bonus, you get some rudimentary failover. This solution will take you a long way (it's where my organization is currently).
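As a sketch, an 802.3ad (LACP) bundle of two GigE ports might look like this; I'm using Cisco IOS-style syntax purely as an example (interface names and commands are assumptions, and vendor CLIs vary):

```
! Bundle two gigabit ports into one logical 2 Gb link using LACP
interface range GigabitEthernet0/23 - 24
 channel-group 1 mode active    ! "active" = actively negotiate LACP
!
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 20
```

One caveat: the load balancing is per-flow (hashed on MAC/IP addresses), so any single transfer is still capped at 1 Gb; it's the aggregate across many clients that scales.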
Buy the optional 10GigE SFP+ modules and deploy those.
Buy bigger, more expensive switches.
What you're saying re: "trunking" the switches together, assigning ports to VLANs, etc., is basically sound. I wouldn't use 10GbE in such a small network. If you need more than 1 Gb of bandwidth between the switches, consider link aggregation, which should still be considerably cheaper than 10GbE ports.
The "HDMI" connections you're seeing to stack switches aren't HDMI. Stacking interfaces are universally proprietary (though they may use familiar connectors) and don't interoperate between switch vendors. A stacking interface brings some fraction of the internal switching fabric bandwidth out of the switch and into the fabric of another switch, typically at a bandwidth that can't be achieved through the access ports (40Gb, for example, on the Dell PowerConnect 6200-series). Generally speaking, stacking switches does cause them to behave as a single logical unit.
I have the impression that you're over-building for such a small network. When you do need to grow in the future (in either port count or throughput), today's price point will buy much more capable gear. The cost of money plays into this equation, too.
I'd question the need for VLANs in such a small infrastructure. You state that you want to limit communication between subnets, but I wonder if it's really so necessary as to warrant the time and money you're going to spend configuring it.
Measure what you've already got before you spend the money to buy something that isn't the right size for your needs.