We have a 24U rack in our lab, and I'm going to completely redesign it over the next few weeks as part of a big scheduled hardware upgrade.
I have a couple of questions about the layout of the server components:
UPSes. At the moment all our UPSes (non-rackmount) sit on a shelf near the top of the rack. Some people say to always put UPSes at the bottom, but I'm afraid of what happens if the nearest server falls on them. I've never had a server fall, but never say never, and it's one thing if it just drops onto the rack frame and quite another if it crushes other equipment.
Switches. People usually suggest putting them at the top of the rack, but I can see only one practical reason for doing so: when I open the rackmount KVM, it blocks the switch's front panel, which is fine because ideally it shouldn't block access to anything other than a switch. On the other hand, all the cables come in from the bottom, so you have to stretch them through the whole rack. If you change cables frequently (and in a lab/development setup you do), that can be a headache.
Patch panels: to use or not to use, and if so, how? Usually people terminate all incoming cables on a patch panel and then use the panel's RJ45 sockets to route them inside the rack. I agree that's useful for large installations, but we only have about 8 cables coming in and out, so why not connect them directly to the switches?
Cable length: use short cables that just reach the equipment, or longer ones that let you slide a server out without disconnecting anything? The first option will never turn into cable hell, but it means you can't pull a server out without powering it off. Remember, this is a development lab, but taking some equipment offline (e.g. the SAN) can bring the whole lab down for up to an hour.
That's probably enough for one question. Thanks for all answers.
UPS location
Bottom. Really. All that lead acid outweighs a solid steel server any day, and you want that weight at the bottom. Unlike servers, a UPS gets pulled out next to never, and it provides stability at the base of the rack. Also, by not putting all that weight at the top, your rack is less likely to act like a metronome if there's some heavy rocking.
Switches vs. Patch Panels
Depends on your config, but for a 24U I'd lean towards switches. Or, if possible, external patch panels that then feed cables into your rack.
Cable Management Arms, or, what length of cable do you need
For 1U servers, I've stopped using the arms; I'll undress the back of the server if I need to pull it out (label your cables!). For 2U servers I have no firm opinion, but for larger servers the arm makes sense.
Our data centre has a raised floor, with the void used for cold air; we also run the cabling in traywork under the floor. (I consider traywork to be the oft-neglected OSI Layer 0).
Although I'd put the UPS and heavy kit near the bottom, I usually leave the bottom 2-3U empty, to make it easier to pass cables up and to allow cold air to rise in racks where kit doesn't blow front-to-back properly (e.g. side-to-side). Don't forget blanking plates, though, to keep the hot/cold aisle separation working properly if the rack has mesh doors.
In terms of cabling, if I'm expecting a lot of switch ports to be used, I'd consider running a panel-to-plug loom directly to the switch card, with each of the 24/48 ports numbered individually and the bundle itself clearly labelled. Then you can present that as a patch panel somewhere central and reduce the number of joins along the cable path.
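If you do go the loom route, generating the per-port labels from a short script keeps the numbering consistent at both ends. Here's a minimal Python sketch; the rack, panel and switch names (R1, PP01, SW01) are just placeholders for whatever scheme you use:

    # Minimal sketch: print matching labels for both ends of a panel-to-plug loom.
    # The naming scheme (rack-panel-port / rack-switch-port) is an example, not a standard.

    RACK = "R1"      # placeholder rack name
    PANEL = "PP01"   # placeholder patch panel name
    SWITCH = "SW01"  # placeholder switch name
    PORTS = 24       # size of the loom (24 or 48)

    def loom_labels(rack, panel, switch, ports):
        """Yield (panel_end, switch_end) label pairs for each port in the loom."""
        for port in range(1, ports + 1):
            yield (f"{rack}-{panel}-P{port:02d}", f"{rack}-{switch}-P{port:02d}")

    for panel_end, switch_end in loom_labels(RACK, PANEL, SWITCH, PORTS):
        print(f"{panel_end}  <->  {switch_end}")

Print the output twice, stick one copy at each end of the bundle, and the panel end and switch end can't drift out of sync.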
I prefer cable management bars used 1:1 with patch panels; I'd usually place two 24-port panels together, with one cable management bar above and one below. Patch panels go at the top of the rack, as they're light.
All my labelling is as clear as I can make it, since I may not be on-site during an incident at 2am and I want to reduce the chance of problems after a random vendor engineer swaps hardware out.
Ziptie every cable where it shouldn't move, both at the server and at the rack. Use Velcro to bundle the extra cable length needed to run the server when it's pulled out.
UPS goes at the bottom. So do battery packs. Heavier servers go toward the bottom, filler panels at the top.
Given increasing server densities, I would add a switch to the rack. Depending on your requirements, VLANs or separate switches for the management connections might be appropriate. Color-code your network segments.
Cable management arms tend to get in the way. Go with Velcro.
Cables should be just long enough to run the server when it's pulled out; anything longer becomes a problem. If necessary, provide zones to take up the extra length: somewhere near the power distribution panel for power cords, and near the switch for data cables. Wider cabinets with wiring channels in the sides are also an option.
I'm in agreement with the answers here so far. My only point of difference is labeling cables: I don't, and never have. I use cables color-coded for the particular connection/host type and a spreadsheet to track what connects to what (a rough sketch of that kind of register is at the end of this answer). There's nothing I hate more than seeing a bunch of label tails hanging, cables mislabeled, running out of labels, labels coming off and littering the bottom of the rack, having to find the label maker, etc.
I know what's in my racks and I know how each device is connected. I don't need a label to tell me what's what; by the time I've found the label and read it, I've already worked out which device/port it's connected to. Labels are useful for people who don't know what's in my racks... and those people will never have access to my racks... so no labels needed.
EDIT
I should state that I'm a one-man operation (1 sysadmin, 50 servers). If I worked at a big shop with lots of equipment and personnel, I might have a different opinion on labeling cables. I do keep my servers, switches, etc. labeled for the purpose of remote-hands reboots and the like.
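If a plain spreadsheet feels too loose, the same idea works as a flat file you can query from a script. A minimal Python sketch, assuming a hypothetical connections.csv with device, port, far_device, far_port and color columns (the file name and columns are my own convention, not from any tool):

    # Minimal sketch: answer "what does this port connect to?" from a flat CSV
    # cable register instead of physical labels. File name and columns are assumptions.
    import csv

    def load_connections(path="connections.csv"):
        """Return a dict keyed by (device, port) mapping to the full row."""
        table = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                table[(row["device"], row["port"])] = row
        return table

    def far_end(table, device, port):
        row = table.get((device, port))
        if row is None:
            return f"{device} {port}: not in the register"
        return f"{device} {port} -> {row['far_device']} {row['far_port']} ({row['color']} cable)"

    conns = load_connections()
    print(far_end(conns, "sw01", "gi0/1"))

The lookup tells you the same thing a label would, just kept in one place instead of on a hundred cable tails.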