So how do we do it? And why do we do it?
That's easy. How we do it is laid out below. Why we do it is pretty simple: it makes maintenance easier, troubleshooting easier, and the end result is something you can really be proud to show off.
Planning
This all starts, as most things start, with an Excel spreadsheet. I find that a spreadsheet allows me to visualise exactly which RU each piece of equipment will be loaded into. After all, it's better to shuffle machines around in a spreadsheet than it is once the servers are in place.
When finding a home for a piece of equipment, there are a couple of things I try to keep in mind. I try to load all of the equipment as close to the middle of the rack as I can, with the most frequently accessed pieces (such as an LCD KVM) mounted closest to the middle so you're not having to bend down or look up to use them. I'll leave a space or two here and there for additional pieces of equipment, but those with a lot of potential for expansion (such as a SAN) I load at the edges. This of course means I'll generally fill a rack with a maximum of two pieces of equipment that have a large potential for expansion. It's at this stage that a capacity plan could really come in handy. Unfortunately, I never seem to have one, so I do my best with the information I have at the time and add an extra RU or two where I can.
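If a spreadsheet isn't handy, the same plan can be roughed out in a few lines of Python. This is only a sketch with made-up equipment names and RU positions, not our actual layout; the point is just that each RU gets exactly one occupant and the remaining gaps are easy to see.

```python
# Rough rack plan: map RU ranges (1 = bottom, 42 = top) to what lives there.
# Equipment names and positions below are invented for illustration only.
RACK_HEIGHT = 42

plan = {
    (1, 3):   "UPS + EBM",            # heavy gear at the bottom
    (18, 18): "Patch panel",          # frequently accessed, near the middle
    (19, 19): "LCD KVM",
    (20, 23): "Servers (4 x 1RU)",
    (38, 42): "SAN",                  # room to expand at the edge
}

# Flag double-booked RUs and count what's still free.
occupied = {}
for (low, high), item in plan.items():
    for ru in range(low, high + 1):
        if ru in occupied:
            print(f"Conflict at RU {ru}: {occupied[ru]} vs {item}")
        occupied[ru] = item

free = [ru for ru in range(1, RACK_HEIGHT + 1) if ru not in occupied]
print(f"{len(free)} RU free")
```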
Rack UPSs
When it comes to a rack UPS, I mount them at the bottom of the rack. Pretty simple really: they're usually quite heavy, and mounting them lower down makes them easier to slide in and out. At least, that's my thinking. I usually mount these UPS systems by myself and find it easy enough; with two people, it should be no problem.
Patch Panels / KVMs etc
I keep my patch panels, KVMs and the like smack bang in the middle of the rack. Why? Because it allows me to purchase standard length cables, which are readily available. Sometimes you forget to order enough, or new equipment is added after you've finished planning, and it's more likely you'll have a few of these shorter cables just lying around.
Once all of the equipment locations are finalised, it's time to move on to the cabling. My spreadsheet tracks the numbers, lengths and colours of each cable that's connected to each piece of equipment. Once I'm done, I can tally up the totals ready for purchasing. Don't think the spreadsheet just tracks network cables either; I also keep track of the quantity, length and colour of each power cable.
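Tallying the order is the only genuinely arithmetical part of the spreadsheet, so here's a quick sketch of the same idea in Python. The cable list below is invented for illustration; in practice the rows come straight out of the spreadsheet.

```python
from collections import Counter

# One entry per cable run, as tracked against each piece of equipment.
# (type, length, colour) -- values here are made up for illustration.
cables = [
    ("Cat6", "1m", "blue"),
    ("Cat6", "1m", "blue"),
    ("Cat6", "2m", "red"),
    ("Power IEC", "1m", "red"),
    ("Power IEC", "1m", "blue"),
]

# Tally identical cables so the purchase order is just the totals.
totals = Counter(cables)
for (cable_type, length, colour), qty in sorted(totals.items()):
    print(f"{qty} x {cable_type} {length} {colour}")
```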
Purchasing
Server Rack
We replaced the existing server rack, an old Dell PowerEdge rack, with an APC AR3100 42U server rack. Buying a new rack also allowed us to pre-cable and pre-load as much equipment as possible into it, then simply unload the old rack and remove it from the area.
The other benefit was that the APC has Zero U mounting points, allowing us to mount PDUs on one side and cable management on the other.
Cable Management
We use APC Vertical Cable Organizers for vertical cable management. There's not much to be said, but I highly recommend them.
At the top of the rack, we have a 600mm APC Cable Trough. The cable trough is mainly used by the electricians, and the outlet boxes are mounted to it.
APC cable troughs, when connected together, allow the server rack to be removed independently of the electrical work. When the server rack was replaced, we needed electricians onsite to disconnect and relocate the outlets. With the outlets mounted on the cable trough, the electricians are no longer required when replacing the server rack. Below is what the top of the server rack used to look like, and as you can see, there's no way around getting the electricians onsite without that cable trough.
We used a ton of velcro cable wraps for data cables and cable ties for power cables, SAS cables and the like. Buy this stuff in bulk, because you'll rip through it quite quickly if you want your installation to look neat.
Also, replacing cable ties is annoying. Use them only where necessary.
PDUs
We used a pair of APC AP8959 Switched IP PDUs. In a 240V single-phase environment with 20A circuits, these are probably the best for the job. In this environment they'll handle up to 16A each, which is enough to meet the CTO's needs.
One of the PDUs (the innermost one) is connected to a UPS, while the other is connected directly to mains power.
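As a rough sanity check on that 16A figure, here's the back-of-the-envelope maths. The total rack load used below is an assumed number for illustration, not a measurement from this build, and watts are treated as roughly equal to VA to keep it simple.

```python
# Back-of-the-envelope PDU capacity check.
VOLTAGE = 240          # volts, single phase
PDU_LIMIT_AMPS = 16    # usable current per AP8959 on a 20A circuit

per_pdu_va = VOLTAGE * PDU_LIMIT_AMPS       # 3840 VA per PDU
assumed_rack_load_w = 2500                  # hypothetical total rack draw
per_feed_load_w = assumed_rack_load_w / 2   # split across the two PDUs

print(f"Per-PDU capacity: {per_pdu_va} VA")
print(f"Assumed load per feed: {per_feed_load_w:.0f} W "
      f"({per_feed_load_w / per_pdu_va:.0%} of capacity)")
```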
KVM
We purchased a 16 port Dell 2162DS IP KVM. It's cheap and nasty but does the job. I have to say, it was my first time dealing with one and I probably don't want to use them ever again. Check out the marketing video if you're curious; it played no part in selecting the KVM.
You might be wondering: what happens when you have more than 16 servers in a rack? Well, either you upgrade the KVM or you designate one SIP (Server Interface Pod) as a 'floating' SIP to quickly connect to the new servers.
Before you ask, why not just buy a bigger KVM? It wasn't an option, and the CTO is quite happy to upgrade when necessary.
Rack UPS
Again we turned to Dell for a UPS. From memory it was a 1920W line interactive UPS. We also added an Extended Battery Module (EBM) and a network management card so it could be remotely managed and monitored. We also purchased a Dell temperature probe to keep track of the temperature within the rack.
While you might think this sounds quite small, remember that this UPS is connected to just one PDU and therefore only needs to power HALF the total power demand of the rack.
During post-installation testing, we were at one hour of battery backup time for the entire server rack, which far exceeded the 20 minute requirement set by the CTO. Without the EBM, though, we were at 15 minutes of battery backup time. The system can't be expanded beyond this capacity. You get what you pay for, nuff said.
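Runtime scales roughly with battery energy over load, so here's a hedged back-of-the-envelope version of that result. The battery watt-hour and load figures below are assumptions chosen purely so the maths lands near the observed 15 minutes and one hour; they're not taken from the Dell spec sheet, and real discharge curves give less than a linear estimate at high load.

```python
# Very rough UPS runtime estimate: usable battery energy / load drawn.
# All figures here are assumptions for illustration only.
load_w = 900                    # assumed steady draw on the UPS-backed feed

internal_battery_wh = 250       # hypothetical internal battery energy
ebm_wh = 750                    # hypothetical extended battery module energy

def runtime_minutes(battery_wh, load_w, efficiency=0.9):
    """Linear estimate; real discharge curves are worse at high load."""
    return battery_wh * efficiency / load_w * 60

print(f"Internal only: ~{runtime_minutes(internal_battery_wh, load_w):.0f} min")
print(f"With EBM:      ~{runtime_minutes(internal_battery_wh + ebm_wh, load_w):.0f} min")
```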
Pictures Before The Rebuild
After The Rebuild
Although the picture doesn't show it, we wrapped the IEC leads connected to the outlets in the same colour as their feed cable (red or blue). This ensures you always know you're dealing with the right power outlet.
You can also clearly see the status of those two outlets without opening the doors. This was something that wasn't possible with the old setup.
And this is the front of the server rack. In the end, we ditched the Dell covers on the front of the servers. I'm sure there are plenty of good reasons to do it, such as airflow and better visibility of status lights. The real reason, though, was that they look ugly.
And then the before and after. You'll also notice we scrapped the Dell cable management arms. I'm sure there are plenty of good reasons to do it, such as airflow and better visibility of status lights. The real reason though, well, you know why.
And that's how you rebuild a server rack. It costs a little, but it's so much easier to work on than what was there before.
IITG